I0428 23:37:35.415656 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0428 23:37:35.415857 7 e2e.go:124] Starting e2e run "067565bc-1640-414d-8e1c-5b736f74e3cc" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588117054 - Will randomize all specs
Will run 275 of 4992 specs
Apr 28 23:37:35.469: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 23:37:35.474: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 28 23:37:35.502: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 28 23:37:35.543: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 28 23:37:35.543: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 28 23:37:35.543: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 28 23:37:35.550: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 28 23:37:35.550: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 28 23:37:35.550: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 28 23:37:35.551: INFO: kube-apiserver version: v1.17.0
Apr 28 23:37:35.551: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 23:37:35.556: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:37:35.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Apr 28 23:37:35.627: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:37:39.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4171" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":46,"failed":0}
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:37:39.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 23:37:39.791: INFO: Creating deployment "webserver-deployment"
Apr 28 23:37:39.800: INFO: Waiting for observed generation 1
Apr 28 23:37:42.094: INFO: Waiting for all required pods to come up
Apr 28 23:37:42.098: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 28 23:37:50.108: INFO: Waiting for deployment "webserver-deployment" to complete
Apr 28 23:37:50.113: INFO: Updating deployment "webserver-deployment" with a non-existent image
Apr 28 23:37:50.118: INFO: Updating deployment webserver-deployment
Apr 28 23:37:50.118: INFO: Waiting for observed generation 2
Apr 28 23:37:52.150: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 28 23:37:52.155: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 28 23:37:52.158: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 28 23:37:52.166: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 28 23:37:52.166: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 28 23:37:52.168: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 28 23:37:52.171: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Apr 28 23:37:52.171: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Apr 28 23:37:52.175: INFO: Updating deployment webserver-deployment
Apr 28 23:37:52.175: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Apr 28 23:37:52.234: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 28 23:37:52.357: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Apr 28 23:37:52.542: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8520 /apis/apps/v1/namespaces/deployment-8520/deployments/webserver-deployment 795c2eda-1ce1-4f0f-8557-41a07c265a63 11835788 3 2020-04-28 23:37:39 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026df058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-28 23:37:50 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-28 23:37:52 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 28 23:37:52.662: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8520 /apis/apps/v1/namespaces/deployment-8520/replicasets/webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 11835838 3 2020-04-28 23:37:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 795c2eda-1ce1-4f0f-8557-41a07c265a63 0xc0026df637 
0xc0026df638}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026df6a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 28 23:37:52.662: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 28 23:37:52.662: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8520 /apis/apps/v1/namespaces/deployment-8520/replicasets/webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 11835815 3 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 795c2eda-1ce1-4f0f-8557-41a07c265a63 0xc0026df527 0xc0026df528}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd 
pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026df588 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 28 23:37:52.713: INFO: Pod "webserver-deployment-595b5b9587-47vkx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-47vkx webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-47vkx 4a4eba89-459a-4ac0-a7bf-daed1a51b2c5 11835701 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0026dffc7 0xc0026dffc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.246,StartTime:2020-04-28 23:37:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:48 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eed9cdae68b5af520a49d37782a8eca5c4a5ab0aad534aec5fa7a82498696fe8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.714: INFO: Pod "webserver-deployment-595b5b9587-4fs5q" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4fs5q webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-4fs5q db088e97-1be1-41e1-8fa3-6b43e98a4246 11835698 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fe237 0xc0027fe238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.248,StartTime:2020-04-28 23:37:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3c1bae51e2b985d1f4be476668ca5aa7a2501adc8dd0b56e26fbba67c1373914,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.714: INFO: Pod "webserver-deployment-595b5b9587-4hv5j" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4hv5j webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-4hv5j 2af559f9-a46d-4145-8f0d-cf148de648fb 11835819 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fe577 0xc0027fe578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.714: INFO: Pod "webserver-deployment-595b5b9587-5f2tf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5f2tf webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-5f2tf 9ca0ffbf-453b-4146-9e00-3e5f86bfb268 11835805 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fe7f7 0xc0027fe7f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.714: INFO: Pod "webserver-deployment-595b5b9587-7pnms" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7pnms webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-7pnms c7b24ca6-e739-43f1-9746-b3b473d50051 11835806 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fea27 0xc0027fea28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.714: INFO: Pod "webserver-deployment-595b5b9587-8cw49" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8cw49 webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-8cw49 2cbc7817-8079-453e-9e5a-01a7268e19a9 11835648 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027feb57 0xc0027feb58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.243,StartTime:2020-04-28 23:37:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:44 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://71726e48d8aaa7ec28f317ad89b6eb888ec6c5a389d2c21c87deac3d65de1a85,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.715: INFO: Pod "webserver-deployment-595b5b9587-8r5cl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8r5cl webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-8r5cl 093c84a3-3890-4e84-8307-0f7fd9d0ad0b 11835655 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fecd7 0xc0027fecd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.244,StartTime:2020-04-28 23:37:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://38ab7178c90fcd67a573562d3c61bc53f69e0ef2658c73f3db42eb2c27978132,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.715: INFO: Pod "webserver-deployment-595b5b9587-bf8sw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bf8sw webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-bf8sw a3020064-1b6d-40dc-a649-16b5b7e731c9 11835668 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fee57 0xc0027fee58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.245,StartTime:2020-04-28 23:37:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dff15f3e6f5a6b2c8d2aed77e74c383097b5cdbcf4906ca92ab4ccce5acb1fff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.715: INFO: Pod "webserver-deployment-595b5b9587-bg2fk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bg2fk webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-bg2fk 50a403d0-938e-4ad6-b295-d9a38ce81c4b 11835814 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027fefd7 0xc0027fefd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.715: INFO: Pod "webserver-deployment-595b5b9587-ccnnh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ccnnh webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-ccnnh 2d10a3b2-e51a-4c3d-bb51-557747125a01 11835804 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ff0f7 0xc0027ff0f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.715: INFO: Pod "webserver-deployment-595b5b9587-lzzmr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lzzmr webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-lzzmr c146dd20-6576-4fa9-98a4-161326596c89 11835803 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ff217 0xc0027ff218}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.715: INFO: Pod "webserver-deployment-595b5b9587-mk27l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mk27l webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-mk27l 75fc7884-2903-4b02-a6ec-426bd6fc3e86 11835841 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ff337 0xc0027ff338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-28 23:37:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.716: INFO: Pod "webserver-deployment-595b5b9587-mrvpb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mrvpb webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-mrvpb 8077609c-9a75-49a7-b0c5-6973858b699b 11835695 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ff4a7 0xc0027ff4a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.247,StartTime:2020-04-28 23:37:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://de722b12d5b0c5cfc19c1823e61a4e07c82019ac5bae8039053f23b5bfcc7c73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.716: INFO: Pod "webserver-deployment-595b5b9587-q7h8f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q7h8f webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-q7h8f 22a8a4c8-546c-4c98-bdf0-c69c7c78066e 11835816 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ff837 0xc0027ff838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.716: INFO: Pod "webserver-deployment-595b5b9587-qd7np" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qd7np webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-qd7np 0017f7a3-c010-4418-b5ce-2cf9814e65a3 11835839 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ff957 0xc0027ff958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 23:37:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.716: INFO: Pod "webserver-deployment-595b5b9587-qtj55" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qtj55 webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-qtj55 65855d63-40e5-41af-b636-3908f47c89f4 11835658 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ffab7 0xc0027ffab8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.244,StartTime:2020-04-28 23:37:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fab2b39ab89849888dc46bb8f52249f3bb5b10a57dbae31744a35a526eeaf2da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.716: INFO: Pod "webserver-deployment-595b5b9587-slb2k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-slb2k webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-slb2k 67438085-540b-41dd-af98-09ef63e70e42 11835691 0 2020-04-28 23:37:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ffc37 0xc0027ffc38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.247,StartTime:2020-04-28 23:37:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-28 23:37:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://176a5f90a78a794f89aa65c389add7859d40a97d699d0470dcc7dfee88ef8223,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.717: INFO: Pod "webserver-deployment-595b5b9587-slmw6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-slmw6 webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-slmw6 b08427ab-4979-4cd5-9129-c75e9c6cacb7 11835817 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ffdb7 0xc0027ffdb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.717: INFO: Pod "webserver-deployment-595b5b9587-wgwlf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wgwlf webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-wgwlf 198a839e-4448-4d05-b977-cb129a6c7a88 11835818 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ffed7 0xc0027ffed8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.717: INFO: Pod "webserver-deployment-595b5b9587-x9zlq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x9zlq webserver-deployment-595b5b9587- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-595b5b9587-x9zlq bdc53700-a3c2-47bc-bf64-a1a2064d3fa3 11835846 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a8474387-70a9-4b91-821a-238a91e381bc 0xc0027ffff7 0xc0027ffff8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 23:37:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.717: INFO: Pod "webserver-deployment-c7997dcc8-44jvs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-44jvs webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-44jvs 4641cd1d-034e-43f3-b806-75a2f59e6fc5 11835787 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970347 0xc002970348}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.718: INFO: Pod "webserver-deployment-c7997dcc8-6txg6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6txg6 webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-6txg6 da3f14d9-64d8-4cdc-9cd6-534c08476285 11835809 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970557 0xc002970558}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.718: INFO: Pod "webserver-deployment-c7997dcc8-8949d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8949d webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-8949d 221ee094-2bee-4858-9abd-fd6a6b065125 11835744 0 2020-04-28 23:37:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970807 0xc002970808}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-28 23:37:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.718: INFO: Pod "webserver-deployment-c7997dcc8-d8vkd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d8vkd webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-d8vkd 294f2ecc-2c1f-430a-ad76-1b1e812c8d47 11835811 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc0029709f7 0xc0029709f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.718: INFO: Pod "webserver-deployment-c7997dcc8-hsbgd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hsbgd webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-hsbgd b7636155-a855-4c65-883e-16647ff937b7 11835813 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970b27 0xc002970b28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.718: INFO: Pod "webserver-deployment-c7997dcc8-r7gp5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r7gp5 webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-r7gp5 f94a7287-275e-4194-b24a-0227da3c8764 11835790 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970c57 0xc002970c58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.719: INFO: Pod "webserver-deployment-c7997dcc8-sfqkb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sfqkb webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-sfqkb 20320d6e-c538-46f5-aa7d-0c9a4a8dd7b1 11835812 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970d87 0xc002970d88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.719: INFO: Pod "webserver-deployment-c7997dcc8-sgqgs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sgqgs webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-sgqgs de380614-6799-4f89-85a7-3b8129c8a8bc 11835753 0 2020-04-28 23:37:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002970eb7 0xc002970eb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 23:37:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.719: INFO: Pod "webserver-deployment-c7997dcc8-tljsg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tljsg webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-tljsg 5bd2aabf-f531-4351-af16-a356b735ddd5 11835832 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002971137 0xc002971138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 28 23:37:52.720: INFO: Pod "webserver-deployment-c7997dcc8-v4xbl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v4xbl webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-v4xbl 1b5eb50c-66c0-440d-ba0b-9d67521e60bc 11835728 0 2020-04-28 23:37:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002971337 0xc002971338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-28 23:37:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.720: INFO: Pod "webserver-deployment-c7997dcc8-vwbfv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vwbfv webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-vwbfv dd6c15e0-4c15-4575-9a6d-16e0ece7d56b 11835729 0 2020-04-28 23:37:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002971627 0xc002971628}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 23:37:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.720: INFO: Pod "webserver-deployment-c7997dcc8-xgcvl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xgcvl webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-xgcvl dcf4de64-fd8e-4a0f-b192-114c7cefbe76 11835789 0 2020-04-28 23:37:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002971a27 0xc002971a28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:37:52.720: INFO: Pod "webserver-deployment-c7997dcc8-xgk7t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xgk7t webserver-deployment-c7997dcc8- deployment-8520 /api/v1/namespaces/deployment-8520/pods/webserver-deployment-c7997dcc8-xgk7t 9ad90122-9a66-44f3-ac12-5f91f32e1eb9 11835755 0 2020-04-28 23:37:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 58a546c5-1040-46e2-9705-dfefcd4d7436 0xc002971be7 0xc002971be8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d4459,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d4459,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d4459,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-28 23:37:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-28 23:37:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:37:52.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8520" for this suite. • [SLOW TEST:13.369 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":2,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:37:53.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:38:11.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6346" for this suite. • [SLOW TEST:18.856 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":3,"skipped":78,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:38:11.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 28 23:38:12.378: INFO: Waiting up to 5m0s for pod "pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1" in namespace "emptydir-5616" to be "Succeeded or Failed" Apr 28 23:38:12.414: INFO: Pod "pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.846695ms Apr 28 23:38:14.419: INFO: Pod "pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040234365s Apr 28 23:38:16.423: INFO: Pod "pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044602754s STEP: Saw pod success Apr 28 23:38:16.423: INFO: Pod "pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1" satisfied condition "Succeeded or Failed" Apr 28 23:38:16.426: INFO: Trying to get logs from node latest-worker pod pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1 container test-container: STEP: delete the pod Apr 28 23:38:16.483: INFO: Waiting for pod pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1 to disappear Apr 28 23:38:16.516: INFO: Pod pod-5d32ffd0-baf0-4ddc-9665-01fecfbf79d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:38:16.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5616" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:38:16.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 23:38:16.594: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66" in namespace "projected-3687" to be "Succeeded or Failed" Apr 28 23:38:16.598: INFO: Pod "downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66": Phase="Pending", Reason="", readiness=false. Elapsed: 3.802413ms Apr 28 23:38:18.603: INFO: Pod "downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008259945s Apr 28 23:38:20.607: INFO: Pod "downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012473777s STEP: Saw pod success Apr 28 23:38:20.607: INFO: Pod "downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66" satisfied condition "Succeeded or Failed" Apr 28 23:38:20.610: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66 container client-container: STEP: delete the pod Apr 28 23:38:20.673: INFO: Waiting for pod downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66 to disappear Apr 28 23:38:20.682: INFO: Pod downwardapi-volume-6fce965f-5196-444e-a68a-2355073a2c66 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:38:20.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3687" for this suite. 
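The repeated `Elapsed:` lines above come from a poll-until-condition loop: the framework re-checks the pod phase at a fixed interval until the pod reaches "Succeeded or Failed" or the overall timeout (5m0s here) expires. A minimal Python sketch of that pattern — the interval, timeout, and fake phase sequence below are illustrative assumptions, not the framework's actual implementation:

```python
import time

def wait_for_condition(check, timeout_s=300, interval_s=2.0):
    """Poll check() until it returns a truthy value or timeout_s elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval_s)
    raise TimeoutError(f"condition not met within {timeout_s}s")

# Illustrative stand-in for the API server: the pod is Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])

def pod_terminal_phase():
    phase = next(phases)
    return phase if phase in ("Succeeded", "Failed") else None

print(wait_for_condition(pod_terminal_phase, interval_s=0))  # interval 0 so the demo runs instantly
```

The same loop shape underlies every "Waiting up to ... for pod ... to be ..." line in this log, only the condition being checked differs.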
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":142,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:38:20.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 28 23:38:20.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5008' Apr 28 23:38:23.402: INFO: stderr: "" Apr 28 23:38:23.402: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 23:38:23.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5008' Apr 28 23:38:23.531: INFO: stderr: "" Apr 28 23:38:23.531: INFO: stdout: "update-demo-nautilus-2vtnz update-demo-nautilus-hrkv4 " Apr 28 23:38:23.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vtnz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5008' Apr 28 23:38:23.618: INFO: stderr: "" Apr 28 23:38:23.618: INFO: stdout: "" Apr 28 23:38:23.618: INFO: update-demo-nautilus-2vtnz is created but not running Apr 28 23:38:28.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5008' Apr 28 23:38:28.705: INFO: stderr: "" Apr 28 23:38:28.705: INFO: stdout: "update-demo-nautilus-2vtnz update-demo-nautilus-hrkv4 " Apr 28 23:38:28.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vtnz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5008' Apr 28 23:38:28.800: INFO: stderr: "" Apr 28 23:38:28.800: INFO: stdout: "true" Apr 28 23:38:28.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vtnz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5008' Apr 28 23:38:28.887: INFO: stderr: "" Apr 28 23:38:28.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 23:38:28.887: INFO: validating pod update-demo-nautilus-2vtnz Apr 28 23:38:28.890: INFO: got data: { "image": "nautilus.jpg" } Apr 28 23:38:28.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 23:38:28.890: INFO: update-demo-nautilus-2vtnz is verified up and running Apr 28 23:38:28.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrkv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5008' Apr 28 23:38:28.992: INFO: stderr: "" Apr 28 23:38:28.992: INFO: stdout: "true" Apr 28 23:38:28.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrkv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5008' Apr 28 23:38:29.086: INFO: stderr: "" Apr 28 23:38:29.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 23:38:29.086: INFO: validating pod update-demo-nautilus-hrkv4 Apr 28 23:38:29.090: INFO: got data: { "image": "nautilus.jpg" } Apr 28 23:38:29.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 28 23:38:29.090: INFO: update-demo-nautilus-hrkv4 is verified up and running STEP: using delete to clean up resources Apr 28 23:38:29.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5008' Apr 28 23:38:29.190: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 23:38:29.190: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 28 23:38:29.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5008' Apr 28 23:38:29.283: INFO: stderr: "No resources found in kubectl-5008 namespace.\n" Apr 28 23:38:29.283: INFO: stdout: "" Apr 28 23:38:29.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5008 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 23:38:29.376: INFO: stderr: "" Apr 28 23:38:29.376: INFO: stdout: "update-demo-nautilus-2vtnz\nupdate-demo-nautilus-hrkv4\n" Apr 28 23:38:29.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5008' Apr 28 23:38:29.984: INFO: stderr: "No resources found in kubectl-5008 namespace.\n" Apr 28 23:38:29.984: INFO: stdout: "" Apr 28 23:38:29.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5008 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' 
Apr 28 23:38:30.099: INFO: stderr: ""
Apr 28 23:38:30.099: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:38:30.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5008" for this suite.
• [SLOW TEST:9.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":6,"skipped":143,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:38:30.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 28 23:38:30.534: INFO: Waiting up to 5m0s for pod "downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e" in namespace "downward-api-1693" to be "Succeeded or Failed"
Apr 28 23:38:30.546: INFO: Pod "downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.509185ms
Apr 28 23:38:32.607: INFO: Pod "downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072883074s
Apr 28 23:38:34.625: INFO: Pod "downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091399003s
STEP: Saw pod success
Apr 28 23:38:34.625: INFO: Pod "downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e" satisfied condition "Succeeded or Failed"
Apr 28 23:38:34.628: INFO: Trying to get logs from node latest-worker2 pod downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e container dapi-container:
STEP: delete the pod
Apr 28 23:38:34.656: INFO: Waiting for pod downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e to disappear
Apr 28 23:38:34.660: INFO: Pod downward-api-7b65416c-57b5-4042-9d80-dacb8315fa2e no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:38:34.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1693" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":163,"failed":0}
SSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:38:34.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 23:38:34.776: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Pending, waiting for it to be Running (with Ready = true)
Apr 28 23:38:36.780: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Pending, waiting for it to be Running (with Ready = true)
Apr 28 23:38:38.779: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:40.791: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:42.797: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:44.781: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:46.781: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:48.781: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:50.781: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:52.781: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:54.781: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:56.779: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = false)
Apr 28 23:38:58.780: INFO: The status of Pod test-webserver-33bb9e66-6652-4b5b-8fcd-c12208a8cdb9 is Running (Ready = true)
Apr 28 23:38:58.783: INFO: Container started at 2020-04-28 23:38:36 +0000 UTC, pod became ready at 2020-04-28 23:38:58 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:38:58.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-732" for this suite.
• [SLOW TEST:24.125 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:38:58.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 23:38:58.855: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-94df7956-46b5-4b4c-bec7-30ccbe9e2ba7" in namespace "security-context-test-3249" to be "Succeeded or Failed"
Apr 28 23:38:58.893: INFO: Pod "busybox-privileged-false-94df7956-46b5-4b4c-bec7-30ccbe9e2ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.61811ms
Apr 28 23:39:00.897: INFO: Pod "busybox-privileged-false-94df7956-46b5-4b4c-bec7-30ccbe9e2ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042563807s
Apr 28 23:39:02.901: INFO: Pod "busybox-privileged-false-94df7956-46b5-4b4c-bec7-30ccbe9e2ba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046868337s
Apr 28 23:39:02.902: INFO: Pod "busybox-privileged-false-94df7956-46b5-4b4c-bec7-30ccbe9e2ba7" satisfied condition "Succeeded or Failed"
Apr 28 23:39:02.908: INFO: Got logs for pod "busybox-privileged-false-94df7956-46b5-4b4c-bec7-30ccbe9e2ba7": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:39:02.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3249" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":187,"failed":0}
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:02.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 28 23:39:03.225: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:39:09.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8964" for this suite.
• [SLOW TEST:6.460 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":10,"skipped":187,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:09.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 28 23:39:14.018: INFO: Successfully updated pod "pod-update-ff2291de-4b97-4ba0-9681-4aa1ef9ac064"
STEP: verifying the updated pod is in kubernetes
Apr 28 23:39:14.034: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:39:14.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-876" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":218,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:14.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 28 23:39:14.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b" in namespace "downward-api-3699" to be "Succeeded or Failed"
Apr 28 23:39:14.132: INFO: Pod "downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.414522ms
Apr 28 23:39:16.136: INFO: Pod "downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013284578s
Apr 28 23:39:18.141: INFO: Pod "downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017672533s
STEP: Saw pod success
Apr 28 23:39:18.141: INFO: Pod "downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b" satisfied condition "Succeeded or Failed"
Apr 28 23:39:18.144: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b container client-container:
STEP: delete the pod
Apr 28 23:39:18.173: INFO: Waiting for pod downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b to disappear
Apr 28 23:39:18.183: INFO: Pod downwardapi-volume-42a112b6-133d-4e9c-8d83-257c5170d27b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:39:18.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3699" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":237,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:18.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Apr 28 23:39:18.773: INFO: created pod pod-service-account-defaultsa
Apr 28 23:39:18.773: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 28 23:39:18.782: INFO: created pod pod-service-account-mountsa
Apr 28 23:39:18.782: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 28 23:39:18.791: INFO: created pod pod-service-account-nomountsa
Apr 28 23:39:18.791: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 28 23:39:18.833: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 28 23:39:18.833: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 28 23:39:18.848: INFO: created pod pod-service-account-mountsa-mountspec
Apr 28 23:39:18.848: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 28 23:39:18.907: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 28 23:39:18.907: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 28 23:39:18.927: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 28 23:39:18.927: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 28 23:39:18.970: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 28 23:39:18.970: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 28 23:39:19.050: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 28 23:39:19.050: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:39:19.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-366" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":13,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:19.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2169 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2169 I0428 23:39:19.420320 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2169, replica count: 2 I0428 23:39:22.470703 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 23:39:25.470930 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 23:39:28.471132 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 23:39:31.471448 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 23:39:31.471: INFO: Creating new exec pod Apr 28 23:39:36.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2169 execpodhhptg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 28 23:39:36.780: INFO: stderr: "I0428 23:39:36.687640 313 log.go:172] (0xc00003a630) (0xc000974000) Create stream\nI0428 23:39:36.687711 313 log.go:172] (0xc00003a630) (0xc000974000) Stream added, broadcasting: 1\nI0428 23:39:36.690754 313 log.go:172] (0xc00003a630) Reply frame received for 1\nI0428 23:39:36.690820 313 log.go:172] (0xc00003a630) (0xc0009740a0) Create stream\nI0428 23:39:36.690832 313 log.go:172] (0xc00003a630) (0xc0009740a0) Stream added, broadcasting: 3\nI0428 23:39:36.691795 313 log.go:172] 
(0xc00003a630) Reply frame received for 3\nI0428 23:39:36.691823 313 log.go:172] (0xc00003a630) (0xc000974140) Create stream\nI0428 23:39:36.691832 313 log.go:172] (0xc00003a630) (0xc000974140) Stream added, broadcasting: 5\nI0428 23:39:36.692706 313 log.go:172] (0xc00003a630) Reply frame received for 5\nI0428 23:39:36.773823 313 log.go:172] (0xc00003a630) Data frame received for 5\nI0428 23:39:36.773855 313 log.go:172] (0xc000974140) (5) Data frame handling\nI0428 23:39:36.773871 313 log.go:172] (0xc000974140) (5) Data frame sent\nI0428 23:39:36.773884 313 log.go:172] (0xc00003a630) Data frame received for 5\nI0428 23:39:36.773894 313 log.go:172] (0xc000974140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0428 23:39:36.774007 313 log.go:172] (0xc000974140) (5) Data frame sent\nI0428 23:39:36.774151 313 log.go:172] (0xc00003a630) Data frame received for 5\nI0428 23:39:36.774178 313 log.go:172] (0xc000974140) (5) Data frame handling\nI0428 23:39:36.774199 313 log.go:172] (0xc00003a630) Data frame received for 3\nI0428 23:39:36.774211 313 log.go:172] (0xc0009740a0) (3) Data frame handling\nI0428 23:39:36.776082 313 log.go:172] (0xc00003a630) Data frame received for 1\nI0428 23:39:36.776102 313 log.go:172] (0xc000974000) (1) Data frame handling\nI0428 23:39:36.776119 313 log.go:172] (0xc000974000) (1) Data frame sent\nI0428 23:39:36.776131 313 log.go:172] (0xc00003a630) (0xc000974000) Stream removed, broadcasting: 1\nI0428 23:39:36.776163 313 log.go:172] (0xc00003a630) Go away received\nI0428 23:39:36.776361 313 log.go:172] (0xc00003a630) (0xc000974000) Stream removed, broadcasting: 1\nI0428 23:39:36.776374 313 log.go:172] (0xc00003a630) (0xc0009740a0) Stream removed, broadcasting: 3\nI0428 23:39:36.776380 313 log.go:172] (0xc00003a630) (0xc000974140) Stream removed, broadcasting: 5\n" Apr 28 23:39:36.781: INFO: stdout: "" Apr 28 23:39:36.781: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2169 execpodhhptg -- /bin/sh -x -c nc -zv -t -w 2 10.96.211.155 80' Apr 28 23:39:36.986: INFO: stderr: "I0428 23:39:36.916654 333 log.go:172] (0xc000a7b6b0) (0xc000a9e6e0) Create stream\nI0428 23:39:36.916722 333 log.go:172] (0xc000a7b6b0) (0xc000a9e6e0) Stream added, broadcasting: 1\nI0428 23:39:36.921505 333 log.go:172] (0xc000a7b6b0) Reply frame received for 1\nI0428 23:39:36.921542 333 log.go:172] (0xc000a7b6b0) (0xc0006855e0) Create stream\nI0428 23:39:36.921553 333 log.go:172] (0xc000a7b6b0) (0xc0006855e0) Stream added, broadcasting: 3\nI0428 23:39:36.922432 333 log.go:172] (0xc000a7b6b0) Reply frame received for 3\nI0428 23:39:36.922475 333 log.go:172] (0xc000a7b6b0) (0xc000554a00) Create stream\nI0428 23:39:36.922491 333 log.go:172] (0xc000a7b6b0) (0xc000554a00) Stream added, broadcasting: 5\nI0428 23:39:36.923353 333 log.go:172] (0xc000a7b6b0) Reply frame received for 5\nI0428 23:39:36.980903 333 log.go:172] (0xc000a7b6b0) Data frame received for 3\nI0428 23:39:36.980970 333 log.go:172] (0xc0006855e0) (3) Data frame handling\nI0428 23:39:36.981004 333 log.go:172] (0xc000a7b6b0) Data frame received for 5\nI0428 23:39:36.981029 333 log.go:172] (0xc000554a00) (5) Data frame handling\nI0428 23:39:36.981054 333 log.go:172] (0xc000554a00) (5) Data frame sent\nI0428 23:39:36.981072 333 log.go:172] (0xc000a7b6b0) Data frame received for 5\nI0428 23:39:36.981083 333 log.go:172] (0xc000554a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.211.155 80\nConnection to 10.96.211.155 80 port [tcp/http] succeeded!\nI0428 23:39:36.982753 333 log.go:172] (0xc000a7b6b0) Data frame received for 1\nI0428 23:39:36.982770 333 log.go:172] (0xc000a9e6e0) (1) Data frame handling\nI0428 23:39:36.982777 333 log.go:172] (0xc000a9e6e0) (1) Data frame sent\nI0428 23:39:36.982784 333 log.go:172] (0xc000a7b6b0) (0xc000a9e6e0) Stream removed, broadcasting: 1\nI0428 23:39:36.982873 333 log.go:172] 
(0xc000a7b6b0) Go away received\nI0428 23:39:36.983088 333 log.go:172] (0xc000a7b6b0) (0xc000a9e6e0) Stream removed, broadcasting: 1\nI0428 23:39:36.983107 333 log.go:172] (0xc000a7b6b0) (0xc0006855e0) Stream removed, broadcasting: 3\nI0428 23:39:36.983114 333 log.go:172] (0xc000a7b6b0) (0xc000554a00) Stream removed, broadcasting: 5\n" Apr 28 23:39:36.986: INFO: stdout: "" Apr 28 23:39:36.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2169 execpodhhptg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31275' Apr 28 23:39:37.195: INFO: stderr: "I0428 23:39:37.128842 354 log.go:172] (0xc00087ea50) (0xc000732280) Create stream\nI0428 23:39:37.128913 354 log.go:172] (0xc00087ea50) (0xc000732280) Stream added, broadcasting: 1\nI0428 23:39:37.142290 354 log.go:172] (0xc00087ea50) Reply frame received for 1\nI0428 23:39:37.142342 354 log.go:172] (0xc00087ea50) (0xc000942000) Create stream\nI0428 23:39:37.142355 354 log.go:172] (0xc00087ea50) (0xc000942000) Stream added, broadcasting: 3\nI0428 23:39:37.143044 354 log.go:172] (0xc00087ea50) Reply frame received for 3\nI0428 23:39:37.143078 354 log.go:172] (0xc00087ea50) (0xc000732320) Create stream\nI0428 23:39:37.143085 354 log.go:172] (0xc00087ea50) (0xc000732320) Stream added, broadcasting: 5\nI0428 23:39:37.143764 354 log.go:172] (0xc00087ea50) Reply frame received for 5\nI0428 23:39:37.187181 354 log.go:172] (0xc00087ea50) Data frame received for 3\nI0428 23:39:37.187222 354 log.go:172] (0xc000942000) (3) Data frame handling\nI0428 23:39:37.187251 354 log.go:172] (0xc00087ea50) Data frame received for 5\nI0428 23:39:37.187263 354 log.go:172] (0xc000732320) (5) Data frame handling\nI0428 23:39:37.187275 354 log.go:172] (0xc000732320) (5) Data frame sent\nI0428 23:39:37.187288 354 log.go:172] (0xc00087ea50) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.13 31275\nConnection to 172.17.0.13 31275 port [tcp/31275] succeeded!\nI0428 
23:39:37.187310 354 log.go:172] (0xc000732320) (5) Data frame handling\nI0428 23:39:37.188890 354 log.go:172] (0xc00087ea50) Data frame received for 1\nI0428 23:39:37.188915 354 log.go:172] (0xc000732280) (1) Data frame handling\nI0428 23:39:37.188948 354 log.go:172] (0xc000732280) (1) Data frame sent\nI0428 23:39:37.188976 354 log.go:172] (0xc00087ea50) (0xc000732280) Stream removed, broadcasting: 1\nI0428 23:39:37.189521 354 log.go:172] (0xc00087ea50) Go away received\nI0428 23:39:37.189561 354 log.go:172] (0xc00087ea50) (0xc000732280) Stream removed, broadcasting: 1\nI0428 23:39:37.189693 354 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc000942000), 0x5:(*spdystream.Stream)(0xc000732320)}\nI0428 23:39:37.189740 354 log.go:172] (0xc00087ea50) (0xc000942000) Stream removed, broadcasting: 3\nI0428 23:39:37.189761 354 log.go:172] (0xc00087ea50) (0xc000732320) Stream removed, broadcasting: 5\n" Apr 28 23:39:37.195: INFO: stdout: "" Apr 28 23:39:37.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2169 execpodhhptg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31275' Apr 28 23:39:37.454: INFO: stderr: "I0428 23:39:37.321442 377 log.go:172] (0xc0000e84d0) (0xc0004e4b40) Create stream\nI0428 23:39:37.321492 377 log.go:172] (0xc0000e84d0) (0xc0004e4b40) Stream added, broadcasting: 1\nI0428 23:39:37.323618 377 log.go:172] (0xc0000e84d0) Reply frame received for 1\nI0428 23:39:37.323657 377 log.go:172] (0xc0000e84d0) (0xc00093a000) Create stream\nI0428 23:39:37.323670 377 log.go:172] (0xc0000e84d0) (0xc00093a000) Stream added, broadcasting: 3\nI0428 23:39:37.324379 377 log.go:172] (0xc0000e84d0) Reply frame received for 3\nI0428 23:39:37.324423 377 log.go:172] (0xc0000e84d0) (0xc0006f12c0) Create stream\nI0428 23:39:37.324441 377 log.go:172] (0xc0000e84d0) (0xc0006f12c0) Stream added, broadcasting: 5\nI0428 23:39:37.325297 377 log.go:172] 
(0xc0000e84d0) Reply frame received for 5\nI0428 23:39:37.448417 377 log.go:172] (0xc0000e84d0) Data frame received for 5\nI0428 23:39:37.448437 377 log.go:172] (0xc0006f12c0) (5) Data frame handling\nI0428 23:39:37.448449 377 log.go:172] (0xc0006f12c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31275\nConnection to 172.17.0.12 31275 port [tcp/31275] succeeded!\nI0428 23:39:37.448536 377 log.go:172] (0xc0000e84d0) Data frame received for 3\nI0428 23:39:37.448558 377 log.go:172] (0xc00093a000) (3) Data frame handling\nI0428 23:39:37.448580 377 log.go:172] (0xc0000e84d0) Data frame received for 5\nI0428 23:39:37.448593 377 log.go:172] (0xc0006f12c0) (5) Data frame handling\nI0428 23:39:37.450429 377 log.go:172] (0xc0000e84d0) Data frame received for 1\nI0428 23:39:37.450442 377 log.go:172] (0xc0004e4b40) (1) Data frame handling\nI0428 23:39:37.450450 377 log.go:172] (0xc0004e4b40) (1) Data frame sent\nI0428 23:39:37.450459 377 log.go:172] (0xc0000e84d0) (0xc0004e4b40) Stream removed, broadcasting: 1\nI0428 23:39:37.450492 377 log.go:172] (0xc0000e84d0) Go away received\nI0428 23:39:37.450753 377 log.go:172] (0xc0000e84d0) (0xc0004e4b40) Stream removed, broadcasting: 1\nI0428 23:39:37.450770 377 log.go:172] (0xc0000e84d0) (0xc00093a000) Stream removed, broadcasting: 3\nI0428 23:39:37.450781 377 log.go:172] (0xc0000e84d0) (0xc0006f12c0) Stream removed, broadcasting: 5\n" Apr 28 23:39:37.454: INFO: stdout: "" Apr 28 23:39:37.454: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:39:37.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2169" for this suite. 
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:18.392 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":14,"skipped":284,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:37.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:39:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8136" for this suite.
• [SLOW TEST:11.172 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":15,"skipped":301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:39:48.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 28 23:39:49.059: INFO: Waiting up to 5m0s for pod "pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3" in namespace "emptydir-5723" to be "Succeeded or Failed"
Apr 28 23:39:49.115: INFO: Pod "pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 55.568765ms
Apr 28 23:39:51.119: INFO: Pod "pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059509922s
Apr 28 23:39:53.122: INFO: Pod "pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.063020416s STEP: Saw pod success Apr 28 23:39:53.122: INFO: Pod "pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3" satisfied condition "Succeeded or Failed" Apr 28 23:39:53.124: INFO: Trying to get logs from node latest-worker2 pod pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3 container test-container: STEP: delete the pod Apr 28 23:39:53.143: INFO: Waiting for pod pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3 to disappear Apr 28 23:39:53.148: INFO: Pod pod-2c8990b8-ae67-4b28-ba4d-b3216b9cd5a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:39:53.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5723" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":325,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:39:53.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2309 [It] should have a working scale 
subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2309 Apr 28 23:39:53.355: INFO: Found 0 stateful pods, waiting for 1 Apr 28 23:40:03.359: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 23:40:03.382: INFO: Deleting all statefulset in ns statefulset-2309 Apr 28 23:40:03.388: INFO: Scaling statefulset ss to 0 Apr 28 23:40:23.442: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 23:40:23.445: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:40:23.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2309" for this suite. 
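The "getting scale subresource / updating a scale subresource" steps exercise the StatefulSet's `/scale` endpoint rather than patching the StatefulSet object directly. The object read from and written back to `.../statefulsets/ss/scale` looks like this (the replica count is illustrative; the names come from the log):

```yaml
# autoscaling/v1 Scale is the wire type for the /scale subresource;
# writing spec.replicas here is what the test then verifies on the
# StatefulSet's own Spec.Replicas.
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: ss
  namespace: statefulset-2309
spec:
  replicas: 2   # illustrative new value
```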
• [SLOW TEST:30.331 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":17,"skipped":331,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:40:23.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:40:27.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1987" for this suite. 
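The Docker Containers test schedules a pod whose container sets neither `command` nor `args`, so the image's own ENTRYPOINT and CMD run unchanged. A hedged sketch — the pod name and image are assumptions (any image with a useful default entrypoint works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-default   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12   # assumption, not from the log
    # No command: or args: here, so the image's ENTRYPOINT/CMD apply as-is;
    # that is the behavior the conformance test asserts.
```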
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":339,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:40:27.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 28 23:40:27.660: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 23:40:27.686: INFO: Waiting for terminating namespaces to be deleted... 
Apr 28 23:40:27.688: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 28 23:40:27.693: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 23:40:27.693: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 23:40:27.693: INFO: client-containers-d022d704-fe4a-4374-92bd-f53c17e70c67 from containers-1987 started at 2020-04-28 23:40:23 +0000 UTC (1 container statuses recorded) Apr 28 23:40:27.693: INFO: Container test-container ready: true, restart count 0 Apr 28 23:40:27.693: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 23:40:27.693: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 23:40:27.693: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 28 23:40:27.698: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 23:40:27.698: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 23:40:27.698: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 28 23:40:27.698: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-3bce6e9f-550a-42ee-8bbf-4385d83d87ea 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-3bce6e9f-550a-42ee-8bbf-4385d83d87ea off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3bce6e9f-550a-42ee-8bbf-4385d83d87ea [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:45:35.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-89" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.349 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":19,"skipped":351,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:45:35.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-19c95e78-0244-4560-814a-1ce187630204 STEP: Creating a pod to test consume configMaps Apr 28 23:45:36.016: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62" in namespace "projected-7088" to be "Succeeded or Failed" Apr 28 23:45:36.036: INFO: Pod "pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62": Phase="Pending", Reason="", readiness=false. Elapsed: 19.97364ms Apr 28 23:45:38.055: INFO: Pod "pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039257838s Apr 28 23:45:40.059: INFO: Pod "pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043359774s STEP: Saw pod success Apr 28 23:45:40.059: INFO: Pod "pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62" satisfied condition "Succeeded or Failed" Apr 28 23:45:40.062: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62 container projected-configmap-volume-test: STEP: delete the pod Apr 28 23:45:40.091: INFO: Waiting for pod pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62 to disappear Apr 28 23:45:40.096: INFO: Pod pod-projected-configmaps-be2baa8c-8d08-47a1-8aac-0ecbcc371a62 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:45:40.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7088" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:45:40.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5250 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 28 23:45:40.318: INFO: Found 0 stateful pods, waiting for 3 Apr 28 23:45:50.323: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 23:45:50.323: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 23:45:50.323: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 28 23:46:00.322: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 23:46:00.322: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 23:46:00.322: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 28 23:46:00.348: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 28 23:46:10.428: INFO: Updating stateful set ss2 Apr 28 23:46:10.493: INFO: Waiting for Pod statefulset-5250/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 28 23:46:20.847: INFO: Found 2 stateful pods, waiting for 3 Apr 28 23:46:30.852: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 23:46:30.852: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 
23:46:30.852: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 28 23:46:30.876: INFO: Updating stateful set ss2 Apr 28 23:46:30.900: INFO: Waiting for Pod statefulset-5250/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 28 23:46:40.927: INFO: Updating stateful set ss2 Apr 28 23:46:40.988: INFO: Waiting for StatefulSet statefulset-5250/ss2 to complete update Apr 28 23:46:40.988: INFO: Waiting for Pod statefulset-5250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 28 23:46:50.993: INFO: Deleting all statefulset in ns statefulset-5250 Apr 28 23:46:50.995: INFO: Scaling statefulset ss2 to 0 Apr 28 23:47:21.006: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 23:47:21.009: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:21.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5250" for this suite. 
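The canary and phased rolling updates above are driven by `spec.updateStrategy.rollingUpdate.partition`: pods with an ordinal greater than or equal to the partition receive the new revision, while lower ordinals keep the old one, which is why only `ss2-2` moves to the new revision during the canary step. A sketch of the relevant fragment, using the image change recorded in the log (labels and container name are hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-5250
spec:
  replicas: 3
  serviceName: test                 # the headless service created in BeforeEach
  selector:
    matchLabels:
      app: ss2                      # hypothetical labels
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                  # canary: only ordinal 2 (ss2-2) is updated
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver             # hypothetical container name
        image: docker.io/library/httpd:2.4.39-alpine   # updated from 2.4.38-alpine
```

Lowering the partition step by step (2, then 1, then 0) produces the "phased" rollout the test performs.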
• [SLOW TEST:100.932 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":21,"skipped":374,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:21.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 23:47:21.428: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 23:47:23.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714441, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714441, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714441, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714441, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 23:47:26.475: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:27.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-745" for this suite. STEP: Destroying namespace "webhook-745-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.209 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":22,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:27.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 23:47:28.233: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 23:47:30.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714448, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714448, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714448, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714448, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 23:47:33.284: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:33.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3523" for this suite. STEP: Destroying namespace "webhook-3523-markers" for this suite. 
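Both webhook tests deploy a webhook server behind the `e2e-test-webhook` Service and register a MutatingWebhookConfiguration against it before creating a ConfigMap that the webhook is expected to mutate. A hedged sketch of such a registration — the configuration name, webhook name, and path are illustrative assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-config   # hypothetical
webhooks:
- name: mutate-configmap.example.com       # hypothetical
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-3523
      path: /mutating-configmaps           # assumption
    caBundle: <base64-encoded CA>          # the cert from "Setting up server cert"
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

Deleting the collection of webhook configurations (as the "listing mutating webhooks" test does) is why the second ConfigMap it creates must come through unmutated.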
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.234 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":23,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:33.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 28 23:47:33.624: INFO: Waiting up to 5m0s for pod "pod-ac17ff05-2182-40a4-b56a-5637e678c8df" in namespace "emptydir-5803" to be "Succeeded or Failed" Apr 28 23:47:33.841: INFO: Pod "pod-ac17ff05-2182-40a4-b56a-5637e678c8df": Phase="Pending", Reason="", readiness=false. Elapsed: 216.694812ms Apr 28 23:47:36.008: INFO: Pod "pod-ac17ff05-2182-40a4-b56a-5637e678c8df": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.384049433s Apr 28 23:47:38.012: INFO: Pod "pod-ac17ff05-2182-40a4-b56a-5637e678c8df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.387434062s STEP: Saw pod success Apr 28 23:47:38.012: INFO: Pod "pod-ac17ff05-2182-40a4-b56a-5637e678c8df" satisfied condition "Succeeded or Failed" Apr 28 23:47:38.014: INFO: Trying to get logs from node latest-worker2 pod pod-ac17ff05-2182-40a4-b56a-5637e678c8df container test-container: STEP: delete the pod Apr 28 23:47:38.077: INFO: Waiting for pod pod-ac17ff05-2182-40a4-b56a-5637e678c8df to disappear Apr 28 23:47:38.082: INFO: Pod pod-ac17ff05-2182-40a4-b56a-5637e678c8df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:38.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5803" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":449,"failed":0} SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:38.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 28 23:47:42.258: INFO: Pod 
pod-hostip-40ead61f-29f1-4144-bcf7-22342e43ee37 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2300" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:42.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-b8f7f3cf-ec46-41ac-b8b6-16eb93c61153 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b8f7f3cf-ec46-41ac-b8b6-16eb93c61153 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:48.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6061" for this suite. 
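The "updates should be reflected in volume" test mounts a ConfigMap as a volume, edits the ConfigMap, and waits for the kubelet to sync the new data into the mounted files on its periodic sync. A sketch of the pod side — the pod name, image, and command are illustrative; the ConfigMap name is the one from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-update        # hypothetical
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                   # assumption
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 1; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/config
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-b8f7f3cf-ec46-41ac-b8b6-16eb93c61153
```

Note that this live-update behavior only applies to whole-volume mounts; a container using the ConfigMap via `subPath` would not see the update.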
• [SLOW TEST:6.178 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:48.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0428 23:47:49.625433 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 28 23:47:49.625: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:47:49.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8038" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":27,"skipped":534,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:47:49.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-lv5b STEP: Creating a pod to test atomic-volume-subpath Apr 28 23:47:49.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lv5b" in namespace "subpath-4560" to be "Succeeded or Failed" Apr 28 23:47:49.824: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316153ms Apr 28 23:47:51.859: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039048423s Apr 28 23:47:53.985: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 4.165092894s Apr 28 23:47:55.989: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 6.169355146s Apr 28 23:47:57.994: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.173734625s Apr 28 23:47:59.998: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 10.178153826s Apr 28 23:48:02.003: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 12.182937664s Apr 28 23:48:04.007: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 14.186708415s Apr 28 23:48:06.011: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 16.190936668s Apr 28 23:48:08.015: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 18.195402076s Apr 28 23:48:10.020: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 20.19983737s Apr 28 23:48:12.024: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Running", Reason="", readiness=true. Elapsed: 22.204075103s Apr 28 23:48:14.029: INFO: Pod "pod-subpath-test-secret-lv5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.208772242s STEP: Saw pod success Apr 28 23:48:14.029: INFO: Pod "pod-subpath-test-secret-lv5b" satisfied condition "Succeeded or Failed" Apr 28 23:48:14.032: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-lv5b container test-container-subpath-secret-lv5b: STEP: delete the pod Apr 28 23:48:14.084: INFO: Waiting for pod pod-subpath-test-secret-lv5b to disappear Apr 28 23:48:14.100: INFO: Pod pod-subpath-test-secret-lv5b no longer exists STEP: Deleting pod pod-subpath-test-secret-lv5b Apr 28 23:48:14.100: INFO: Deleting pod "pod-subpath-test-secret-lv5b" in namespace "subpath-4560" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:14.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4560" for this suite. 
• [SLOW TEST:24.479 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":28,"skipped":541,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:14.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:31.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-388" for this suite. • [SLOW TEST:17.173 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":29,"skipped":546,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:31.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:48:31.348: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:32.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5671" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":30,"skipped":561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:32.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:48:32.482: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:34.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-498" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":31,"skipped":638,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:34.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-5562/configmap-test-2279285d-2056-481e-ac81-09212927dbf6 STEP: Creating a pod to test consume configMaps Apr 28 23:48:34.643: INFO: Waiting up to 5m0s for pod "pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8" in namespace "configmap-5562" to be "Succeeded or Failed" Apr 28 23:48:34.647: INFO: Pod "pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.843772ms Apr 28 23:48:36.786: INFO: Pod "pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142756351s Apr 28 23:48:38.789: INFO: Pod "pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146446508s Apr 28 23:48:40.802: INFO: Pod "pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.159189375s STEP: Saw pod success Apr 28 23:48:40.802: INFO: Pod "pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8" satisfied condition "Succeeded or Failed" Apr 28 23:48:40.805: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8 container env-test: STEP: delete the pod Apr 28 23:48:40.834: INFO: Waiting for pod pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8 to disappear Apr 28 23:48:40.838: INFO: Pod pod-configmaps-3af12a31-0cae-4c28-81b3-490f5399d5b8 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:40.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5562" for this suite. • [SLOW TEST:6.296 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":644,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:40.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:48:40.924: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:45.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8467" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:45.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be 
terminated STEP: the termination message should be set Apr 28 23:48:49.251: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:49.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-497" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":666,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:49.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 28 23:48:49.355: INFO: Waiting up to 5m0s for pod "pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7" in namespace "emptydir-9827" to be "Succeeded or Failed" Apr 28 23:48:49.372: INFO: Pod "pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.213076ms Apr 28 23:48:51.390: INFO: Pod "pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03473239s Apr 28 23:48:53.394: INFO: Pod "pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039173283s STEP: Saw pod success Apr 28 23:48:53.395: INFO: Pod "pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7" satisfied condition "Succeeded or Failed" Apr 28 23:48:53.397: INFO: Trying to get logs from node latest-worker2 pod pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7 container test-container: STEP: delete the pod Apr 28 23:48:53.415: INFO: Waiting for pod pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7 to disappear Apr 28 23:48:53.419: INFO: Pod pod-4fed1e73-b0dc-4514-a902-9f62bafcaee7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:48:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9827" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":683,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:48:53.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 28 23:48:53.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6125' Apr 28 23:48:56.479: INFO: stderr: "" Apr 28 23:48:56.479: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 28 23:49:01.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6125 -o json' Apr 28 23:49:01.627: INFO: 
stderr: "" Apr 28 23:49:01.627: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-28T23:48:56Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6125\",\n \"resourceVersion\": \"11839424\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6125/pods/e2e-test-httpd-pod\",\n \"uid\": \"5d4be745-c64f-4b95-b3b7-ab779f12c4b7\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dklkj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dklkj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dklkj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T23:48:56Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T23:48:58Z\",\n \"status\": 
\"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T23:48:58Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T23:48:56Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c75dd245bb8a470ff670bd2db0881c758615851316c4255751be67a148213923\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-28T23:48:58Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.28\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.28\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-28T23:48:56Z\"\n }\n}\n" STEP: replace the image in the pod Apr 28 23:49:01.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6125' Apr 28 23:49:01.945: INFO: stderr: "" Apr 28 23:49:01.945: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Apr 28 23:49:01.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6125' Apr 28 23:49:04.960: INFO: stderr: "" Apr 28 23:49:04.960: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:49:04.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6125" for this suite. • [SLOW TEST:11.540 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":36,"skipped":734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:49:04.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:49:09.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9057" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":37,"skipped":764,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:49:09.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 23:49:09.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee" in namespace "downward-api-3880" to be "Succeeded or Failed" Apr 28 23:49:09.660: INFO: Pod "downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.524329ms Apr 28 23:49:11.684: INFO: Pod "downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027710423s Apr 28 23:49:13.688: INFO: Pod "downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031430242s STEP: Saw pod success Apr 28 23:49:13.688: INFO: Pod "downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee" satisfied condition "Succeeded or Failed" Apr 28 23:49:13.691: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee container client-container: STEP: delete the pod Apr 28 23:49:13.709: INFO: Waiting for pod downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee to disappear Apr 28 23:49:13.726: INFO: Pod downwardapi-volume-6ba5b7da-75b6-4f39-9bd7-a8bf33e0daee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:49:13.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3880" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:49:13.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9053, will wait for the garbage collector to delete the pods Apr 28 23:49:19.914: INFO: Deleting Job.batch foo took: 5.99335ms Apr 28 23:49:20.215: INFO: Terminating Job.batch foo pods took: 300.209814ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:50:03.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9053" for this suite. 
• [SLOW TEST:49.293 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":39,"skipped":839,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:50:03.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 28 23:50:13.130: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.130: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.171253 7 log.go:172] (0xc002949130) (0xc0024d7900) Create stream I0428 23:50:13.171305 7 log.go:172] (0xc002949130) (0xc0024d7900) Stream added, broadcasting: 1 I0428 23:50:13.174791 7 log.go:172] (0xc002949130) Reply frame 
received for 1 I0428 23:50:13.174830 7 log.go:172] (0xc002949130) (0xc000d26000) Create stream I0428 23:50:13.174839 7 log.go:172] (0xc002949130) (0xc000d26000) Stream added, broadcasting: 3 I0428 23:50:13.175885 7 log.go:172] (0xc002949130) Reply frame received for 3 I0428 23:50:13.175952 7 log.go:172] (0xc002949130) (0xc0024d79a0) Create stream I0428 23:50:13.175969 7 log.go:172] (0xc002949130) (0xc0024d79a0) Stream added, broadcasting: 5 I0428 23:50:13.176982 7 log.go:172] (0xc002949130) Reply frame received for 5 I0428 23:50:13.258014 7 log.go:172] (0xc002949130) Data frame received for 5 I0428 23:50:13.258041 7 log.go:172] (0xc0024d79a0) (5) Data frame handling I0428 23:50:13.258077 7 log.go:172] (0xc002949130) Data frame received for 3 I0428 23:50:13.258094 7 log.go:172] (0xc000d26000) (3) Data frame handling I0428 23:50:13.258115 7 log.go:172] (0xc000d26000) (3) Data frame sent I0428 23:50:13.258132 7 log.go:172] (0xc002949130) Data frame received for 3 I0428 23:50:13.258142 7 log.go:172] (0xc000d26000) (3) Data frame handling I0428 23:50:13.259164 7 log.go:172] (0xc002949130) Data frame received for 1 I0428 23:50:13.259186 7 log.go:172] (0xc0024d7900) (1) Data frame handling I0428 23:50:13.259202 7 log.go:172] (0xc0024d7900) (1) Data frame sent I0428 23:50:13.259219 7 log.go:172] (0xc002949130) (0xc0024d7900) Stream removed, broadcasting: 1 I0428 23:50:13.259235 7 log.go:172] (0xc002949130) Go away received I0428 23:50:13.259646 7 log.go:172] (0xc002949130) (0xc0024d7900) Stream removed, broadcasting: 1 I0428 23:50:13.259669 7 log.go:172] (0xc002949130) (0xc000d26000) Stream removed, broadcasting: 3 I0428 23:50:13.259677 7 log.go:172] (0xc002949130) (0xc0024d79a0) Stream removed, broadcasting: 5 Apr 28 23:50:13.259: INFO: Exec stderr: "" Apr 28 23:50:13.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 28 23:50:13.259: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.286586 7 log.go:172] (0xc002b48c60) (0xc0024d7c20) Create stream I0428 23:50:13.286613 7 log.go:172] (0xc002b48c60) (0xc0024d7c20) Stream added, broadcasting: 1 I0428 23:50:13.289582 7 log.go:172] (0xc002b48c60) Reply frame received for 1 I0428 23:50:13.289634 7 log.go:172] (0xc002b48c60) (0xc0024d7cc0) Create stream I0428 23:50:13.289653 7 log.go:172] (0xc002b48c60) (0xc0024d7cc0) Stream added, broadcasting: 3 I0428 23:50:13.290569 7 log.go:172] (0xc002b48c60) Reply frame received for 3 I0428 23:50:13.290601 7 log.go:172] (0xc002b48c60) (0xc000d260a0) Create stream I0428 23:50:13.290625 7 log.go:172] (0xc002b48c60) (0xc000d260a0) Stream added, broadcasting: 5 I0428 23:50:13.291412 7 log.go:172] (0xc002b48c60) Reply frame received for 5 I0428 23:50:13.355912 7 log.go:172] (0xc002b48c60) Data frame received for 5 I0428 23:50:13.355956 7 log.go:172] (0xc000d260a0) (5) Data frame handling I0428 23:50:13.355985 7 log.go:172] (0xc002b48c60) Data frame received for 3 I0428 23:50:13.355999 7 log.go:172] (0xc0024d7cc0) (3) Data frame handling I0428 23:50:13.356016 7 log.go:172] (0xc0024d7cc0) (3) Data frame sent I0428 23:50:13.356122 7 log.go:172] (0xc002b48c60) Data frame received for 3 I0428 23:50:13.356160 7 log.go:172] (0xc0024d7cc0) (3) Data frame handling I0428 23:50:13.358173 7 log.go:172] (0xc002b48c60) Data frame received for 1 I0428 23:50:13.358201 7 log.go:172] (0xc0024d7c20) (1) Data frame handling I0428 23:50:13.358219 7 log.go:172] (0xc0024d7c20) (1) Data frame sent I0428 23:50:13.358275 7 log.go:172] (0xc002b48c60) (0xc0024d7c20) Stream removed, broadcasting: 1 I0428 23:50:13.358308 7 log.go:172] (0xc002b48c60) Go away received I0428 23:50:13.358356 7 log.go:172] (0xc002b48c60) (0xc0024d7c20) Stream removed, broadcasting: 1 I0428 23:50:13.358372 7 log.go:172] (0xc002b48c60) (0xc0024d7cc0) Stream removed, broadcasting: 3 I0428 23:50:13.358378 7 log.go:172] 
(0xc002b48c60) (0xc000d260a0) Stream removed, broadcasting: 5 Apr 28 23:50:13.358: INFO: Exec stderr: "" Apr 28 23:50:13.358: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.358: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.385420 7 log.go:172] (0xc0037d2420) (0xc000d26780) Create stream I0428 23:50:13.385442 7 log.go:172] (0xc0037d2420) (0xc000d26780) Stream added, broadcasting: 1 I0428 23:50:13.388253 7 log.go:172] (0xc0037d2420) Reply frame received for 1 I0428 23:50:13.388299 7 log.go:172] (0xc0037d2420) (0xc00010cb40) Create stream I0428 23:50:13.388312 7 log.go:172] (0xc0037d2420) (0xc00010cb40) Stream added, broadcasting: 3 I0428 23:50:13.389387 7 log.go:172] (0xc0037d2420) Reply frame received for 3 I0428 23:50:13.389441 7 log.go:172] (0xc0037d2420) (0xc000d26b40) Create stream I0428 23:50:13.389458 7 log.go:172] (0xc0037d2420) (0xc000d26b40) Stream added, broadcasting: 5 I0428 23:50:13.390366 7 log.go:172] (0xc0037d2420) Reply frame received for 5 I0428 23:50:13.452094 7 log.go:172] (0xc0037d2420) Data frame received for 5 I0428 23:50:13.452153 7 log.go:172] (0xc0037d2420) Data frame received for 3 I0428 23:50:13.452195 7 log.go:172] (0xc00010cb40) (3) Data frame handling I0428 23:50:13.452213 7 log.go:172] (0xc00010cb40) (3) Data frame sent I0428 23:50:13.452232 7 log.go:172] (0xc0037d2420) Data frame received for 3 I0428 23:50:13.452249 7 log.go:172] (0xc00010cb40) (3) Data frame handling I0428 23:50:13.452281 7 log.go:172] (0xc000d26b40) (5) Data frame handling I0428 23:50:13.453764 7 log.go:172] (0xc0037d2420) Data frame received for 1 I0428 23:50:13.453857 7 log.go:172] (0xc000d26780) (1) Data frame handling I0428 23:50:13.453899 7 log.go:172] (0xc000d26780) (1) Data frame sent I0428 23:50:13.453976 7 log.go:172] (0xc0037d2420) (0xc000d26780) Stream removed, broadcasting: 1 I0428 
23:50:13.454050 7 log.go:172] (0xc0037d2420) (0xc000d26780) Stream removed, broadcasting: 1 I0428 23:50:13.454060 7 log.go:172] (0xc0037d2420) (0xc00010cb40) Stream removed, broadcasting: 3 I0428 23:50:13.454071 7 log.go:172] (0xc0037d2420) (0xc000d26b40) Stream removed, broadcasting: 5 Apr 28 23:50:13.454: INFO: Exec stderr: "" I0428 23:50:13.454093 7 log.go:172] (0xc0037d2420) Go away received Apr 28 23:50:13.454: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.454: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.479226 7 log.go:172] (0xc00376e420) (0xc002c93220) Create stream I0428 23:50:13.479248 7 log.go:172] (0xc00376e420) (0xc002c93220) Stream added, broadcasting: 1 I0428 23:50:13.482288 7 log.go:172] (0xc00376e420) Reply frame received for 1 I0428 23:50:13.482329 7 log.go:172] (0xc00376e420) (0xc002c932c0) Create stream I0428 23:50:13.482347 7 log.go:172] (0xc00376e420) (0xc002c932c0) Stream added, broadcasting: 3 I0428 23:50:13.483343 7 log.go:172] (0xc00376e420) Reply frame received for 3 I0428 23:50:13.483370 7 log.go:172] (0xc00376e420) (0xc002c93360) Create stream I0428 23:50:13.483380 7 log.go:172] (0xc00376e420) (0xc002c93360) Stream added, broadcasting: 5 I0428 23:50:13.484219 7 log.go:172] (0xc00376e420) Reply frame received for 5 I0428 23:50:13.557929 7 log.go:172] (0xc00376e420) Data frame received for 5 I0428 23:50:13.557981 7 log.go:172] (0xc002c93360) (5) Data frame handling I0428 23:50:13.558007 7 log.go:172] (0xc00376e420) Data frame received for 3 I0428 23:50:13.558024 7 log.go:172] (0xc002c932c0) (3) Data frame handling I0428 23:50:13.558045 7 log.go:172] (0xc002c932c0) (3) Data frame sent I0428 23:50:13.558062 7 log.go:172] (0xc00376e420) Data frame received for 3 I0428 23:50:13.558079 7 log.go:172] (0xc002c932c0) (3) Data frame handling I0428 23:50:13.559531 7 
log.go:172] (0xc00376e420) Data frame received for 1 I0428 23:50:13.559567 7 log.go:172] (0xc002c93220) (1) Data frame handling I0428 23:50:13.559591 7 log.go:172] (0xc002c93220) (1) Data frame sent I0428 23:50:13.559614 7 log.go:172] (0xc00376e420) (0xc002c93220) Stream removed, broadcasting: 1 I0428 23:50:13.559638 7 log.go:172] (0xc00376e420) Go away received I0428 23:50:13.559747 7 log.go:172] (0xc00376e420) (0xc002c93220) Stream removed, broadcasting: 1 I0428 23:50:13.559786 7 log.go:172] (0xc00376e420) (0xc002c932c0) Stream removed, broadcasting: 3 I0428 23:50:13.559803 7 log.go:172] (0xc00376e420) (0xc002c93360) Stream removed, broadcasting: 5 Apr 28 23:50:13.559: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 28 23:50:13.559: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.559: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.592758 7 log.go:172] (0xc002b49290) (0xc000a4e000) Create stream I0428 23:50:13.592780 7 log.go:172] (0xc002b49290) (0xc000a4e000) Stream added, broadcasting: 1 I0428 23:50:13.595676 7 log.go:172] (0xc002b49290) Reply frame received for 1 I0428 23:50:13.595699 7 log.go:172] (0xc002b49290) (0xc002afad20) Create stream I0428 23:50:13.595711 7 log.go:172] (0xc002b49290) (0xc002afad20) Stream added, broadcasting: 3 I0428 23:50:13.596621 7 log.go:172] (0xc002b49290) Reply frame received for 3 I0428 23:50:13.596670 7 log.go:172] (0xc002b49290) (0xc002afadc0) Create stream I0428 23:50:13.596683 7 log.go:172] (0xc002b49290) (0xc002afadc0) Stream added, broadcasting: 5 I0428 23:50:13.597995 7 log.go:172] (0xc002b49290) Reply frame received for 5 I0428 23:50:13.660424 7 log.go:172] (0xc002b49290) Data frame received for 5 I0428 23:50:13.660487 7 log.go:172] (0xc002afadc0) (5) Data frame 
handling I0428 23:50:13.660532 7 log.go:172] (0xc002b49290) Data frame received for 3 I0428 23:50:13.660560 7 log.go:172] (0xc002afad20) (3) Data frame handling I0428 23:50:13.660595 7 log.go:172] (0xc002afad20) (3) Data frame sent I0428 23:50:13.660613 7 log.go:172] (0xc002b49290) Data frame received for 3 I0428 23:50:13.660625 7 log.go:172] (0xc002afad20) (3) Data frame handling I0428 23:50:13.662514 7 log.go:172] (0xc002b49290) Data frame received for 1 I0428 23:50:13.662558 7 log.go:172] (0xc000a4e000) (1) Data frame handling I0428 23:50:13.662582 7 log.go:172] (0xc000a4e000) (1) Data frame sent I0428 23:50:13.662602 7 log.go:172] (0xc002b49290) (0xc000a4e000) Stream removed, broadcasting: 1 I0428 23:50:13.662625 7 log.go:172] (0xc002b49290) Go away received I0428 23:50:13.662720 7 log.go:172] (0xc002b49290) (0xc000a4e000) Stream removed, broadcasting: 1 I0428 23:50:13.662752 7 log.go:172] (0xc002b49290) (0xc002afad20) Stream removed, broadcasting: 3 I0428 23:50:13.662767 7 log.go:172] (0xc002b49290) (0xc002afadc0) Stream removed, broadcasting: 5 Apr 28 23:50:13.662: INFO: Exec stderr: "" Apr 28 23:50:13.662: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.662: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.697574 7 log.go:172] (0xc00376ea50) (0xc002c93540) Create stream I0428 23:50:13.697621 7 log.go:172] (0xc00376ea50) (0xc002c93540) Stream added, broadcasting: 1 I0428 23:50:13.701841 7 log.go:172] (0xc00376ea50) Reply frame received for 1 I0428 23:50:13.701896 7 log.go:172] (0xc00376ea50) (0xc002afae60) Create stream I0428 23:50:13.701913 7 log.go:172] (0xc00376ea50) (0xc002afae60) Stream added, broadcasting: 3 I0428 23:50:13.702958 7 log.go:172] (0xc00376ea50) Reply frame received for 3 I0428 23:50:13.702995 7 log.go:172] (0xc00376ea50) (0xc002afaf00) Create stream I0428 
23:50:13.703010 7 log.go:172] (0xc00376ea50) (0xc002afaf00) Stream added, broadcasting: 5 I0428 23:50:13.704274 7 log.go:172] (0xc00376ea50) Reply frame received for 5 I0428 23:50:13.768274 7 log.go:172] (0xc00376ea50) Data frame received for 3 I0428 23:50:13.768321 7 log.go:172] (0xc002afae60) (3) Data frame handling I0428 23:50:13.768367 7 log.go:172] (0xc00376ea50) Data frame received for 5 I0428 23:50:13.768388 7 log.go:172] (0xc002afaf00) (5) Data frame handling I0428 23:50:13.768427 7 log.go:172] (0xc002afae60) (3) Data frame sent I0428 23:50:13.768443 7 log.go:172] (0xc00376ea50) Data frame received for 3 I0428 23:50:13.768453 7 log.go:172] (0xc002afae60) (3) Data frame handling I0428 23:50:13.770320 7 log.go:172] (0xc00376ea50) Data frame received for 1 I0428 23:50:13.770341 7 log.go:172] (0xc002c93540) (1) Data frame handling I0428 23:50:13.770362 7 log.go:172] (0xc002c93540) (1) Data frame sent I0428 23:50:13.770417 7 log.go:172] (0xc00376ea50) (0xc002c93540) Stream removed, broadcasting: 1 I0428 23:50:13.770517 7 log.go:172] (0xc00376ea50) Go away received I0428 23:50:13.770582 7 log.go:172] (0xc00376ea50) (0xc002c93540) Stream removed, broadcasting: 1 I0428 23:50:13.770621 7 log.go:172] (0xc00376ea50) (0xc002afae60) Stream removed, broadcasting: 3 I0428 23:50:13.770650 7 log.go:172] (0xc00376ea50) (0xc002afaf00) Stream removed, broadcasting: 5 Apr 28 23:50:13.770: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 28 23:50:13.770: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.770: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.806761 7 log.go:172] (0xc002e92580) (0xc002afb180) Create stream I0428 23:50:13.806790 7 log.go:172] (0xc002e92580) (0xc002afb180) Stream added, broadcasting: 1 I0428 
23:50:13.809470 7 log.go:172] (0xc002e92580) Reply frame received for 1 I0428 23:50:13.809511 7 log.go:172] (0xc002e92580) (0xc00010cf00) Create stream I0428 23:50:13.809525 7 log.go:172] (0xc002e92580) (0xc00010cf00) Stream added, broadcasting: 3 I0428 23:50:13.810624 7 log.go:172] (0xc002e92580) Reply frame received for 3 I0428 23:50:13.810656 7 log.go:172] (0xc002e92580) (0xc002c935e0) Create stream I0428 23:50:13.810668 7 log.go:172] (0xc002e92580) (0xc002c935e0) Stream added, broadcasting: 5 I0428 23:50:13.811495 7 log.go:172] (0xc002e92580) Reply frame received for 5 I0428 23:50:13.881840 7 log.go:172] (0xc002e92580) Data frame received for 3 I0428 23:50:13.881863 7 log.go:172] (0xc00010cf00) (3) Data frame handling I0428 23:50:13.881871 7 log.go:172] (0xc00010cf00) (3) Data frame sent I0428 23:50:13.881877 7 log.go:172] (0xc002e92580) Data frame received for 3 I0428 23:50:13.881882 7 log.go:172] (0xc00010cf00) (3) Data frame handling I0428 23:50:13.881891 7 log.go:172] (0xc002e92580) Data frame received for 5 I0428 23:50:13.881901 7 log.go:172] (0xc002c935e0) (5) Data frame handling I0428 23:50:13.883396 7 log.go:172] (0xc002e92580) Data frame received for 1 I0428 23:50:13.883414 7 log.go:172] (0xc002afb180) (1) Data frame handling I0428 23:50:13.883433 7 log.go:172] (0xc002afb180) (1) Data frame sent I0428 23:50:13.883446 7 log.go:172] (0xc002e92580) (0xc002afb180) Stream removed, broadcasting: 1 I0428 23:50:13.883496 7 log.go:172] (0xc002e92580) Go away received I0428 23:50:13.883581 7 log.go:172] (0xc002e92580) (0xc002afb180) Stream removed, broadcasting: 1 I0428 23:50:13.883609 7 log.go:172] (0xc002e92580) (0xc00010cf00) Stream removed, broadcasting: 3 I0428 23:50:13.883622 7 log.go:172] (0xc002e92580) (0xc002c935e0) Stream removed, broadcasting: 5 Apr 28 23:50:13.883: INFO: Exec stderr: "" Apr 28 23:50:13.883: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.883: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.918218 7 log.go:172] (0xc002e92b00) (0xc002afb2c0) Create stream I0428 23:50:13.918242 7 log.go:172] (0xc002e92b00) (0xc002afb2c0) Stream added, broadcasting: 1 I0428 23:50:13.920695 7 log.go:172] (0xc002e92b00) Reply frame received for 1 I0428 23:50:13.920753 7 log.go:172] (0xc002e92b00) (0xc000d26d20) Create stream I0428 23:50:13.920779 7 log.go:172] (0xc002e92b00) (0xc000d26d20) Stream added, broadcasting: 3 I0428 23:50:13.921906 7 log.go:172] (0xc002e92b00) Reply frame received for 3 I0428 23:50:13.921952 7 log.go:172] (0xc002e92b00) (0xc000a4e280) Create stream I0428 23:50:13.921970 7 log.go:172] (0xc002e92b00) (0xc000a4e280) Stream added, broadcasting: 5 I0428 23:50:13.922986 7 log.go:172] (0xc002e92b00) Reply frame received for 5 I0428 23:50:13.984735 7 log.go:172] (0xc002e92b00) Data frame received for 5 I0428 23:50:13.984771 7 log.go:172] (0xc000a4e280) (5) Data frame handling I0428 23:50:13.984800 7 log.go:172] (0xc002e92b00) Data frame received for 3 I0428 23:50:13.984828 7 log.go:172] (0xc000d26d20) (3) Data frame handling I0428 23:50:13.984864 7 log.go:172] (0xc000d26d20) (3) Data frame sent I0428 23:50:13.984882 7 log.go:172] (0xc002e92b00) Data frame received for 3 I0428 23:50:13.984899 7 log.go:172] (0xc000d26d20) (3) Data frame handling I0428 23:50:13.986812 7 log.go:172] (0xc002e92b00) Data frame received for 1 I0428 23:50:13.986847 7 log.go:172] (0xc002afb2c0) (1) Data frame handling I0428 23:50:13.986946 7 log.go:172] (0xc002afb2c0) (1) Data frame sent I0428 23:50:13.986973 7 log.go:172] (0xc002e92b00) (0xc002afb2c0) Stream removed, broadcasting: 1 I0428 23:50:13.987079 7 log.go:172] (0xc002e92b00) (0xc002afb2c0) Stream removed, broadcasting: 1 I0428 23:50:13.987150 7 log.go:172] (0xc002e92b00) (0xc000d26d20) Stream removed, broadcasting: 3 I0428 23:50:13.987171 7 log.go:172] 
(0xc002e92b00) (0xc000a4e280) Stream removed, broadcasting: 5 Apr 28 23:50:13.987: INFO: Exec stderr: "" Apr 28 23:50:13.987: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:13.987: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:13.988582 7 log.go:172] (0xc002e92b00) Go away received I0428 23:50:14.018088 7 log.go:172] (0xc002b49ad0) (0xc000a4e780) Create stream I0428 23:50:14.018126 7 log.go:172] (0xc002b49ad0) (0xc000a4e780) Stream added, broadcasting: 1 I0428 23:50:14.020481 7 log.go:172] (0xc002b49ad0) Reply frame received for 1 I0428 23:50:14.020532 7 log.go:172] (0xc002b49ad0) (0xc000d26fa0) Create stream I0428 23:50:14.020555 7 log.go:172] (0xc002b49ad0) (0xc000d26fa0) Stream added, broadcasting: 3 I0428 23:50:14.021594 7 log.go:172] (0xc002b49ad0) Reply frame received for 3 I0428 23:50:14.021666 7 log.go:172] (0xc002b49ad0) (0xc000d274a0) Create stream I0428 23:50:14.021680 7 log.go:172] (0xc002b49ad0) (0xc000d274a0) Stream added, broadcasting: 5 I0428 23:50:14.022503 7 log.go:172] (0xc002b49ad0) Reply frame received for 5 I0428 23:50:14.088474 7 log.go:172] (0xc002b49ad0) Data frame received for 5 I0428 23:50:14.088553 7 log.go:172] (0xc000d274a0) (5) Data frame handling I0428 23:50:14.088607 7 log.go:172] (0xc002b49ad0) Data frame received for 3 I0428 23:50:14.088636 7 log.go:172] (0xc000d26fa0) (3) Data frame handling I0428 23:50:14.088676 7 log.go:172] (0xc000d26fa0) (3) Data frame sent I0428 23:50:14.088706 7 log.go:172] (0xc002b49ad0) Data frame received for 3 I0428 23:50:14.088718 7 log.go:172] (0xc000d26fa0) (3) Data frame handling I0428 23:50:14.090248 7 log.go:172] (0xc002b49ad0) Data frame received for 1 I0428 23:50:14.090267 7 log.go:172] (0xc000a4e780) (1) Data frame handling I0428 23:50:14.090284 7 log.go:172] (0xc000a4e780) (1) Data frame sent I0428 23:50:14.090296 
7 log.go:172] (0xc002b49ad0) (0xc000a4e780) Stream removed, broadcasting: 1 I0428 23:50:14.090426 7 log.go:172] (0xc002b49ad0) (0xc000a4e780) Stream removed, broadcasting: 1 I0428 23:50:14.090451 7 log.go:172] (0xc002b49ad0) (0xc000d26fa0) Stream removed, broadcasting: 3 I0428 23:50:14.090467 7 log.go:172] (0xc002b49ad0) Go away received I0428 23:50:14.090505 7 log.go:172] (0xc002b49ad0) (0xc000d274a0) Stream removed, broadcasting: 5 Apr 28 23:50:14.090: INFO: Exec stderr: "" Apr 28 23:50:14.090: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-372 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:50:14.090: INFO: >>> kubeConfig: /root/.kube/config I0428 23:50:14.119823 7 log.go:172] (0xc0037d2a50) (0xc000d27b80) Create stream I0428 23:50:14.119843 7 log.go:172] (0xc0037d2a50) (0xc000d27b80) Stream added, broadcasting: 1 I0428 23:50:14.122671 7 log.go:172] (0xc0037d2a50) Reply frame received for 1 I0428 23:50:14.122710 7 log.go:172] (0xc0037d2a50) (0xc000a4e960) Create stream I0428 23:50:14.122724 7 log.go:172] (0xc0037d2a50) (0xc000a4e960) Stream added, broadcasting: 3 I0428 23:50:14.123860 7 log.go:172] (0xc0037d2a50) Reply frame received for 3 I0428 23:50:14.123897 7 log.go:172] (0xc0037d2a50) (0xc002c93680) Create stream I0428 23:50:14.123911 7 log.go:172] (0xc0037d2a50) (0xc002c93680) Stream added, broadcasting: 5 I0428 23:50:14.124989 7 log.go:172] (0xc0037d2a50) Reply frame received for 5 I0428 23:50:14.198929 7 log.go:172] (0xc0037d2a50) Data frame received for 5 I0428 23:50:14.198976 7 log.go:172] (0xc002c93680) (5) Data frame handling I0428 23:50:14.199012 7 log.go:172] (0xc0037d2a50) Data frame received for 3 I0428 23:50:14.199051 7 log.go:172] (0xc000a4e960) (3) Data frame handling I0428 23:50:14.199086 7 log.go:172] (0xc000a4e960) (3) Data frame sent I0428 23:50:14.199109 7 log.go:172] (0xc0037d2a50) Data frame received for 3 
I0428 23:50:14.199126 7 log.go:172] (0xc000a4e960) (3) Data frame handling I0428 23:50:14.200479 7 log.go:172] (0xc0037d2a50) Data frame received for 1 I0428 23:50:14.200493 7 log.go:172] (0xc000d27b80) (1) Data frame handling I0428 23:50:14.200500 7 log.go:172] (0xc000d27b80) (1) Data frame sent I0428 23:50:14.200508 7 log.go:172] (0xc0037d2a50) (0xc000d27b80) Stream removed, broadcasting: 1 I0428 23:50:14.200540 7 log.go:172] (0xc0037d2a50) Go away received I0428 23:50:14.200637 7 log.go:172] (0xc0037d2a50) (0xc000d27b80) Stream removed, broadcasting: 1 I0428 23:50:14.200666 7 log.go:172] (0xc0037d2a50) (0xc000a4e960) Stream removed, broadcasting: 3 I0428 23:50:14.200689 7 log.go:172] (0xc0037d2a50) (0xc002c93680) Stream removed, broadcasting: 5 Apr 28 23:50:14.200: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:50:14.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-372" for this suite. 
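The three verification steps in this test all exercise one rule: the kubelet manages a container's /etc/hosts only when the pod does not use the host network and the container does not mount its own file over /etc/hosts. A hypothetical predicate capturing that rule (a sketch inferred from the test's checks, not kubelet source):

```python
def etc_hosts_is_kubelet_managed(host_network: bool, container_mount_paths) -> bool:
    """True when the kubelet owns the container's /etc/hosts file.

    Rule inferred from the e2e verifications above:
      - hostNetwork=true pods keep the node's own /etc/hosts (not managed)
      - a container mounting its own volume at /etc/hosts wins (not managed)
      - otherwise the kubelet writes the managed file
    """
    if host_network:
        return False
    return "/etc/hosts" not in set(container_mount_paths)

# busybox-1/busybox-2 in the hostNetwork=false pod: kubelet-managed
managed = etc_hosts_is_kubelet_managed(False, ["/etc/hosts-original"])
# busybox-3 mounts /etc/hosts itself: not managed
own_mount = etc_hosts_is_kubelet_managed(False, ["/etc/hosts", "/etc/hosts-original"])
# any container in the hostNetwork=true pod: not managed
host_net = etc_hosts_is_kubelet_managed(True, ["/etc/hosts-original"])
```

The `/etc/hosts-original` path mirrors the file the test `cat`s to compare against the unmanaged content.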
• [SLOW TEST:11.181 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":859,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:50:14.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-1f9cd623-615f-4ee3-b575-7799f138dfb8 STEP: Creating a pod to test consume configMaps Apr 28 23:50:14.346: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009" in namespace "projected-1975" to be "Succeeded or Failed" Apr 28 23:50:14.365: INFO: Pod "pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009": Phase="Pending", Reason="", readiness=false. Elapsed: 19.631546ms Apr 28 23:50:16.369: INFO: Pod "pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023749426s Apr 28 23:50:18.374: INFO: Pod "pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028161538s STEP: Saw pod success Apr 28 23:50:18.374: INFO: Pod "pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009" satisfied condition "Succeeded or Failed" Apr 28 23:50:18.377: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009 container projected-configmap-volume-test: STEP: delete the pod Apr 28 23:50:18.414: INFO: Waiting for pod pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009 to disappear Apr 28 23:50:18.428: INFO: Pod pod-projected-configmaps-3b51460b-676e-4446-897e-838ccd494009 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:50:18.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1975" for this suite. 
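"With mappings" in this test's name refers to the `items` field of a configMap volume source, which projects only selected keys and writes them at explicit file paths instead of one file per key. A hedged sketch of that projection, with hypothetical key and path names (it ignores file modes, optional keys, and the validation the real kubelet performs):

```python
def project_configmap(data: dict, items=None) -> dict:
    """Return {relative_path: content} as a configMap volume would lay files out.

    Without items, every key becomes a file named after the key; with items,
    only the listed keys appear, at their mapped paths. Sketch only.
    """
    if items is None:
        return dict(data)
    return {entry["path"]: data[entry["key"]] for entry in items}

cm = {"data-1": "value-1", "data-2": "value-2"}
# Project only data-1, under a remapped file path (hypothetical names):
files = project_configmap(cm, items=[{"key": "data-1", "path": "path/to/data-1"}])
```

Here `files` holds a single entry at the mapped path, which is what the test's pod reads back to verify the mapping.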
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":859,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:50:18.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Apr 28 23:50:18.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8755'
Apr 28 23:50:18.746: INFO: stderr: ""
Apr 28 23:50:18.746: INFO: stdout: "pod/pause created\n"
Apr 28 23:50:18.746: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 28 23:50:18.746: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8755" to be "running and ready"
Apr 28 23:50:18.751: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.898712ms
Apr 28 23:50:20.755: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009089949s
Apr 28 23:50:22.759: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.013156597s
Apr 28 23:50:22.759: INFO: Pod "pause" satisfied condition "running and ready"
Apr 28 23:50:22.759: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 28 23:50:22.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8755'
Apr 28 23:50:22.865: INFO: stderr: ""
Apr 28 23:50:22.865: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 28 23:50:22.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8755'
Apr 28 23:50:22.957: INFO: stderr: ""
Apr 28 23:50:22.957: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 28 23:50:22.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8755'
Apr 28 23:50:23.056: INFO: stderr: ""
Apr 28 23:50:23.056: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 28 23:50:23.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8755'
Apr 28 23:50:23.157: INFO: stderr: ""
Apr 28 23:50:23.157: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Apr 28 23:50:23.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8755'
Apr 28 23:50:23.283: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 23:50:23.283: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 28 23:50:23.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8755'
Apr 28 23:50:23.396: INFO: stderr: "No resources found in kubectl-8755 namespace.\n"
Apr 28 23:50:23.396: INFO: stdout: ""
Apr 28 23:50:23.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8755 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 28 23:50:23.523: INFO: stderr: ""
Apr 28 23:50:23.523: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:50:23.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8755" for this suite.
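The label add/verify/remove cycle this test drives through the e2e framework can be reproduced by hand. A minimal sketch, assuming a reachable cluster and an existing pod named "pause" in an illustrative namespace "demo":

```shell
# Add a label to the pod
kubectl label pods pause testing-label=testing-label-value --namespace=demo

# Show the label as an extra output column (-L appends a TESTING-LABEL column)
kubectl get pod pause -L testing-label --namespace=demo

# Remove the label: a trailing "-" after the key deletes it
kubectl label pods pause testing-label- --namespace=demo
```

The trailing-dash removal syntax is exactly what appears in the logged command above (`testing-label-`).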
• [SLOW TEST:5.236 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":42,"skipped":860,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:50:23.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7725
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7725
STEP: Creating statefulset with conflicting port in namespace statefulset-7725
STEP: Waiting until pod test-pod will start running in namespace statefulset-7725
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7725
Apr 28 23:50:27.975: INFO: Observed stateful pod in namespace: statefulset-7725, name: ss-0, uid: 7a95933a-b6f7-4068-9e8f-753874b0aac0, status phase: Pending. Waiting for statefulset controller to delete.
Apr 28 23:50:32.967: INFO: Observed stateful pod in namespace: statefulset-7725, name: ss-0, uid: 7a95933a-b6f7-4068-9e8f-753874b0aac0, status phase: Failed. Waiting for statefulset controller to delete.
Apr 28 23:50:32.975: INFO: Observed stateful pod in namespace: statefulset-7725, name: ss-0, uid: 7a95933a-b6f7-4068-9e8f-753874b0aac0, status phase: Failed. Waiting for statefulset controller to delete.
Apr 28 23:50:33.047: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7725
STEP: Removing pod with conflicting port in namespace statefulset-7725
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7725 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 28 23:50:37.145: INFO: Deleting all statefulset in ns statefulset-7725
Apr 28 23:50:37.148: INFO: Scaling statefulset ss to 0
Apr 28 23:50:47.170: INFO: Waiting for statefulset status.replicas updated to 0
Apr 28 23:50:47.173: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:50:47.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7725" for this suite.
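The behaviour verified above — the StatefulSet controller deleting the host-port-conflicted pod ss-0 and recreating it once the conflict is removed — can be observed interactively. A hedged sketch, assuming a reachable cluster and a StatefulSet named "ss" in an illustrative namespace "demo":

```shell
# Stream pod lifecycle transitions (Pending -> Failed -> deleted -> recreated)
# for the stateful pod as the controller reacts to the port conflict.
kubectl get pods ss-0 --namespace=demo --watch

# In another terminal, confirm the controller's replica accounting,
# which the test's teardown waits on (status.replicas reaching 0 after scaling).
kubectl get statefulset ss --namespace=demo -o jsonpath='{.status.replicas}'
```

The jsonpath query mirrors the "Waiting for statefulset status.replicas updated to 0" step in the log.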
• [SLOW TEST:23.546 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":43,"skipped":887,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:50:47.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 28 23:50:47.300: INFO: Waiting up to 5m0s for pod "pod-6b541491-90a1-4b9a-a16a-c6efb19fd767" in namespace "emptydir-9471" to be "Succeeded or Failed"
Apr 28 23:50:47.302: INFO: Pod "pod-6b541491-90a1-4b9a-a16a-c6efb19fd767": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030828ms
Apr 28 23:50:49.315: INFO: Pod "pod-6b541491-90a1-4b9a-a16a-c6efb19fd767": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014845932s
Apr 28 23:50:51.321: INFO: Pod "pod-6b541491-90a1-4b9a-a16a-c6efb19fd767": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02067396s
STEP: Saw pod success
Apr 28 23:50:51.321: INFO: Pod "pod-6b541491-90a1-4b9a-a16a-c6efb19fd767" satisfied condition "Succeeded or Failed"
Apr 28 23:50:51.323: INFO: Trying to get logs from node latest-worker pod pod-6b541491-90a1-4b9a-a16a-c6efb19fd767 container test-container:
STEP: delete the pod
Apr 28 23:50:51.346: INFO: Waiting for pod pod-6b541491-90a1-4b9a-a16a-c6efb19fd767 to disappear
Apr 28 23:50:51.351: INFO: Pod pod-6b541491-90a1-4b9a-a16a-c6efb19fd767 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:50:51.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9471" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":895,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:50:51.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-3f9d2e49-c5eb-4e79-9720-92a19458494e
STEP: Creating a pod to test consume configMaps
Apr 28 23:50:51.479: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134" in namespace "projected-1842" to be "Succeeded or Failed"
Apr 28 23:50:51.532: INFO: Pod "pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134": Phase="Pending", Reason="", readiness=false. Elapsed: 52.293115ms
Apr 28 23:50:53.646: INFO: Pod "pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167019968s
Apr 28 23:50:55.650: INFO: Pod "pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171010158s
STEP: Saw pod success
Apr 28 23:50:55.651: INFO: Pod "pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134" satisfied condition "Succeeded or Failed"
Apr 28 23:50:55.653: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134 container projected-configmap-volume-test:
STEP: delete the pod
Apr 28 23:50:55.695: INFO: Waiting for pod pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134 to disappear
Apr 28 23:50:55.698: INFO: Pod pod-projected-configmaps-4c62a744-f088-492d-842d-4ac881356134 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:50:55.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1842" for this suite.
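The "consumable in multiple volumes in the same pod" case above mounts one ConfigMap through two projected volumes at different paths. A minimal sketch of such a manifest (all names are illustrative, and a reachable cluster with an existing ConfigMap "demo-config" is assumed):

```shell
cat <<'EOF' | kubectl apply --namespace=demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-multi-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg-a/* /etc/cfg-b/*"]
    volumeMounts:
    - name: vol-a
      mountPath: /etc/cfg-a
    - name: vol-b
      mountPath: /etc/cfg-b
  volumes:                           # the same ConfigMap, projected twice
  - name: vol-a
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: vol-b
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
```

As in the test, the pod runs to completion ("Succeeded or Failed") rather than serving, so `restartPolicy: Never` is appropriate.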
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":910,"failed":0}
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:50:55.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 28 23:50:55.814: INFO: Waiting up to 5m0s for pod "downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233" in namespace "downward-api-7561" to be "Succeeded or Failed"
Apr 28 23:50:55.818: INFO: Pod "downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261302ms
Apr 28 23:50:57.822: INFO: Pod "downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008387467s
Apr 28 23:50:59.826: INFO: Pod "downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012509768s
STEP: Saw pod success
Apr 28 23:50:59.826: INFO: Pod "downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233" satisfied condition "Succeeded or Failed"
Apr 28 23:50:59.829: INFO: Trying to get logs from node latest-worker pod downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233 container dapi-container:
STEP: delete the pod
Apr 28 23:50:59.850: INFO: Waiting for pod downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233 to disappear
Apr 28 23:50:59.854: INFO: Pod downward-api-c8fbfaa1-3e6a-4595-a972-4fc201304233 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:50:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7561" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":910,"failed":0}
S
------------------------------
[sig-network] DNS
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:50:59.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
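The full Pod object logged below is dense, but the parts this test actually exercises are `dnsPolicy: None` plus the custom `dnsConfig` (nameserver 1.1.1.1 and search path resolv.conf.local, both visible in the dump). As a hedged sketch, the equivalent manifest fragment looks like this (pod and namespace names are illustrative; the image and args are taken from the dump):

```shell
cat <<'EOF' | kubectl apply -f -    # assumes a reachable cluster
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo                    # illustrative name
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
  dnsPolicy: "None"                 # ignore cluster DNS entirely...
  dnsConfig:                        # ...and build resolv.conf from these values
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
EOF
```

With `dnsPolicy: None`, the kubelet writes the pod's /etc/resolv.conf solely from `dnsConfig`, which is what the two verification steps later in this test (`dns-suffix` and `dns-server-list`) check from inside the container.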
Apr 28 23:50:59.952: INFO: Created pod &Pod{ObjectMeta:{dns-2739 dns-2739 /api/v1/namespaces/dns-2739/pods/dns-2739 3a80e986-5574-4e61-8f03-1ad4bd249d40 11840266 0 2020-04-28 23:50:59 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ltlln,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ltlln,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ltlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecre
ts:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 28 23:50:59.956: INFO: The status of Pod dns-2739 is Pending, waiting for it to be Running (with Ready = true) Apr 28 23:51:01.960: INFO: The status of Pod dns-2739 is Pending, waiting for it to be Running (with Ready = true) Apr 28 23:51:03.960: INFO: The status of Pod dns-2739 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 28 23:51:03.960: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2739 PodName:dns-2739 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:51:03.960: INFO: >>> kubeConfig: /root/.kube/config I0428 23:51:03.995901 7 log.go:172] (0xc0028358c0) (0xc000d83cc0) Create stream I0428 23:51:03.995944 7 log.go:172] (0xc0028358c0) (0xc000d83cc0) Stream added, broadcasting: 1 I0428 23:51:03.998722 7 log.go:172] (0xc0028358c0) Reply frame received for 1 I0428 23:51:03.998764 7 log.go:172] (0xc0028358c0) (0xc001116aa0) Create stream I0428 23:51:03.998779 7 log.go:172] (0xc0028358c0) (0xc001116aa0) Stream added, broadcasting: 3 I0428 23:51:04.000021 7 log.go:172] (0xc0028358c0) Reply frame received for 3 I0428 23:51:04.000087 7 log.go:172] (0xc0028358c0) (0xc001060960) Create stream I0428 23:51:04.000127 7 log.go:172] (0xc0028358c0) (0xc001060960) Stream added, broadcasting: 5 I0428 23:51:04.001076 7 log.go:172] (0xc0028358c0) Reply frame received for 5 I0428 23:51:04.094662 7 log.go:172] (0xc0028358c0) Data frame received for 3 I0428 23:51:04.094706 7 log.go:172] (0xc001116aa0) (3) Data frame handling I0428 23:51:04.094741 7 log.go:172] (0xc001116aa0) (3) Data frame sent I0428 23:51:04.095787 7 log.go:172] (0xc0028358c0) Data frame received for 5 I0428 23:51:04.095807 7 log.go:172] (0xc001060960) (5) Data frame handling I0428 23:51:04.095844 7 log.go:172] (0xc0028358c0) Data frame received for 3 I0428 23:51:04.095882 7 log.go:172] (0xc001116aa0) (3) Data frame handling I0428 23:51:04.097883 7 log.go:172] (0xc0028358c0) Data frame received for 1 I0428 23:51:04.097925 7 log.go:172] (0xc000d83cc0) (1) Data frame handling I0428 23:51:04.097942 7 log.go:172] (0xc000d83cc0) (1) Data frame sent I0428 23:51:04.097964 7 log.go:172] (0xc0028358c0) (0xc000d83cc0) Stream removed, broadcasting: 1 I0428 23:51:04.097980 7 log.go:172] (0xc0028358c0) Go away received I0428 23:51:04.098094 7 log.go:172] (0xc0028358c0) 
(0xc000d83cc0) Stream removed, broadcasting: 1 I0428 23:51:04.098114 7 log.go:172] (0xc0028358c0) (0xc001116aa0) Stream removed, broadcasting: 3 I0428 23:51:04.098124 7 log.go:172] (0xc0028358c0) (0xc001060960) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 28 23:51:04.098: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2739 PodName:dns-2739 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:51:04.098: INFO: >>> kubeConfig: /root/.kube/config I0428 23:51:04.134628 7 log.go:172] (0xc0037d3600) (0xc000d75540) Create stream I0428 23:51:04.134650 7 log.go:172] (0xc0037d3600) (0xc000d75540) Stream added, broadcasting: 1 I0428 23:51:04.137536 7 log.go:172] (0xc0037d3600) Reply frame received for 1 I0428 23:51:04.137583 7 log.go:172] (0xc0037d3600) (0xc001060d20) Create stream I0428 23:51:04.137604 7 log.go:172] (0xc0037d3600) (0xc001060d20) Stream added, broadcasting: 3 I0428 23:51:04.138580 7 log.go:172] (0xc0037d3600) Reply frame received for 3 I0428 23:51:04.138613 7 log.go:172] (0xc0037d3600) (0xc001116b40) Create stream I0428 23:51:04.138624 7 log.go:172] (0xc0037d3600) (0xc001116b40) Stream added, broadcasting: 5 I0428 23:51:04.139521 7 log.go:172] (0xc0037d3600) Reply frame received for 5 I0428 23:51:04.204950 7 log.go:172] (0xc0037d3600) Data frame received for 3 I0428 23:51:04.204986 7 log.go:172] (0xc001060d20) (3) Data frame handling I0428 23:51:04.204997 7 log.go:172] (0xc001060d20) (3) Data frame sent I0428 23:51:04.206994 7 log.go:172] (0xc0037d3600) Data frame received for 3 I0428 23:51:04.207006 7 log.go:172] (0xc001060d20) (3) Data frame handling I0428 23:51:04.207253 7 log.go:172] (0xc0037d3600) Data frame received for 5 I0428 23:51:04.207269 7 log.go:172] (0xc001116b40) (5) Data frame handling I0428 23:51:04.208840 7 log.go:172] (0xc0037d3600) Data frame received for 1 I0428 23:51:04.208863 7 log.go:172] (0xc000d75540) (1) 
Data frame handling I0428 23:51:04.208879 7 log.go:172] (0xc000d75540) (1) Data frame sent I0428 23:51:04.208896 7 log.go:172] (0xc0037d3600) (0xc000d75540) Stream removed, broadcasting: 1 I0428 23:51:04.208910 7 log.go:172] (0xc0037d3600) Go away received I0428 23:51:04.209242 7 log.go:172] (0xc0037d3600) (0xc000d75540) Stream removed, broadcasting: 1 I0428 23:51:04.209293 7 log.go:172] (0xc0037d3600) (0xc001060d20) Stream removed, broadcasting: 3 I0428 23:51:04.209319 7 log.go:172] (0xc0037d3600) (0xc001116b40) Stream removed, broadcasting: 5 Apr 28 23:51:04.209: INFO: Deleting pod dns-2739... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:51:04.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2739" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":47,"skipped":911,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:51:04.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a 
configmap with label A and ensuring the correct watchers observe the notification Apr 28 23:51:04.579: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840305 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:51:04.580: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840305 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 28 23:51:14.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840356 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:51:14.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840356 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 28 23:51:24.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 
3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840387 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:51:24.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840387 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 28 23:51:34.605: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840417 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:51:34.605: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-a 3c16ec42-c3de-4ff9-abd3-9e6f2f2bb7cb 11840417 0 2020-04-28 23:51:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 28 23:51:44.612: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-b 2fb35f7a-498d-42d7-8064-b7a0a365ac4c 11840447 0 2020-04-28 23:51:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:51:44.612: INFO: 
Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-b 2fb35f7a-498d-42d7-8064-b7a0a365ac4c 11840447 0 2020-04-28 23:51:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 28 23:51:54.620: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-b 2fb35f7a-498d-42d7-8064-b7a0a365ac4c 11840477 0 2020-04-28 23:51:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:51:54.620: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2297 /api/v1/namespaces/watch-2297/configmaps/e2e-watch-test-configmap-b 2fb35f7a-498d-42d7-8064-b7a0a365ac4c 11840477 0 2020-04-28 23:51:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:52:04.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2297" for this suite. 
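The watch semantics verified above — watchers with label selectors A, B, and A-or-B each receiving only the ADDED/MODIFIED/DELETED events for matching ConfigMaps — can be reproduced from the command line. A sketch assuming a reachable cluster (label values mirror the test; the namespace is illustrative):

```shell
# Terminal 1: watch only ConfigMaps carrying label A
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A \
  --namespace=demo --watch

# Terminal 2: generate the events the watcher should (and should not) see
kubectl create configmap demo-a --namespace=demo
kubectl label configmap demo-a watch-this-configmap=multiple-watchers-A --namespace=demo
kubectl delete configmap demo-a --namespace=demo   # watcher observes the deletion
```

The unlabeled create is invisible to the selector-scoped watcher; the object only enters its view once the matching label is applied, which is the isolation property the test asserts.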
• [SLOW TEST:60.350 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":48,"skipped":921,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:52:04.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 28 23:52:05.347: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 28 23:52:07.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714725, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714725, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714725, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714725, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 23:52:10.434: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:52:10.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9282" for this suite.
STEP: Destroying namespace "webhook-9282-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.922 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":49,"skipped":953,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:52:10.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-e2b4ede7-23ef-4e0c-9cde-9aea2180d787
STEP: Creating a pod to test consume configMaps
Apr 28 23:52:10.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1" in namespace "projected-2402" to be "Succeeded or Failed"
Apr 28 23:52:10.658: INFO: Pod "pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.386848ms
Apr 28 23:52:12.662: INFO: Pod "pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031599515s
Apr 28 23:52:14.667: INFO: Pod "pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03595574s
STEP: Saw pod success
Apr 28 23:52:14.667: INFO: Pod "pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1" satisfied condition "Succeeded or Failed"
Apr 28 23:52:14.670: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1 container projected-configmap-volume-test:
STEP: delete the pod
Apr 28 23:52:14.716: INFO: Waiting for pod pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1 to disappear
Apr 28 23:52:14.725: INFO: Pod pod-projected-configmaps-568f5cb8-1d0f-48d0-a228-22cb9b9e09d1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:52:14.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2402" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":979,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:52:14.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-15bf064c-d01e-44d2-ad2d-f665523ab1e4 STEP: Creating a pod to test consume configMaps Apr 28 23:52:14.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9" in namespace "projected-4667" to be "Succeeded or Failed" Apr 28 23:52:14.878: INFO: Pod "pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.205343ms Apr 28 23:52:16.916: INFO: Pod "pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065106539s Apr 28 23:52:18.920: INFO: Pod "pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069255114s STEP: Saw pod success Apr 28 23:52:18.921: INFO: Pod "pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9" satisfied condition "Succeeded or Failed" Apr 28 23:52:18.924: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9 container projected-configmap-volume-test: STEP: delete the pod Apr 28 23:52:18.996: INFO: Waiting for pod pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9 to disappear Apr 28 23:52:19.012: INFO: Pod pod-projected-configmaps-1d5c5e4d-45f6-4569-9de1-94811f13b0f9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:52:19.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4667" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:52:19.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with 
name secret-test-8cd7d22f-dec5-46f4-804a-cd405b49b4dd STEP: Creating a pod to test consume secrets Apr 28 23:52:19.087: INFO: Waiting up to 5m0s for pod "pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6" in namespace "secrets-3139" to be "Succeeded or Failed" Apr 28 23:52:19.090: INFO: Pod "pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306706ms Apr 28 23:52:21.094: INFO: Pod "pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007307803s Apr 28 23:52:23.099: INFO: Pod "pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01213837s STEP: Saw pod success Apr 28 23:52:23.099: INFO: Pod "pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6" satisfied condition "Succeeded or Failed" Apr 28 23:52:23.103: INFO: Trying to get logs from node latest-worker pod pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6 container secret-volume-test: STEP: delete the pod Apr 28 23:52:23.120: INFO: Waiting for pod pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6 to disappear Apr 28 23:52:23.142: INFO: Pod pod-secrets-738f0a0b-1a2b-4f4b-9b8e-cce84cd05fe6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:52:23.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3139" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":1015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:52:23.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 28 23:52:23.236: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5447 /api/v1/namespaces/watch-5447/configmaps/e2e-watch-test-watch-closed 4ebb8539-134b-44a1-92a2-5bdcc12504b0 11840685 0 2020-04-28 23:52:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:52:23.236: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5447 /api/v1/namespaces/watch-5447/configmaps/e2e-watch-test-watch-closed 4ebb8539-134b-44a1-92a2-5bdcc12504b0 11840686 0 2020-04-28 23:52:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 28 23:52:23.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5447 /api/v1/namespaces/watch-5447/configmaps/e2e-watch-test-watch-closed 4ebb8539-134b-44a1-92a2-5bdcc12504b0 11840687 0 2020-04-28 23:52:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:52:23.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5447 /api/v1/namespaces/watch-5447/configmaps/e2e-watch-test-watch-closed 4ebb8539-134b-44a1-92a2-5bdcc12504b0 11840688 0 2020-04-28 23:52:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:52:23.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5447" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":53,"skipped":1046,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:52:23.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 28 23:52:23.319: INFO: >>> kubeConfig: /root/.kube/config Apr 28 23:52:25.235: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:52:35.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1091" for this suite. 
• [SLOW TEST:12.561 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":54,"skipped":1063,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:52:35.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-lftgw in namespace proxy-5483 I0428 23:52:35.936315 7 runners.go:190] Created replication controller with name: proxy-service-lftgw, namespace: proxy-5483, replica count: 1 I0428 23:52:36.986739 7 runners.go:190] proxy-service-lftgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 23:52:37.987017 7 runners.go:190] proxy-service-lftgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 23:52:38.987207 7 runners.go:190] 
proxy-service-lftgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 23:52:39.987424 7 runners.go:190] proxy-service-lftgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 23:52:40.987657 7 runners.go:190] proxy-service-lftgw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 23:52:41.987859 7 runners.go:190] proxy-service-lftgw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 23:52:41.991: INFO: setup took 6.100916375s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 28 23:52:42.004: INFO: (0) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 12.181606ms) Apr 28 23:52:42.004: INFO: (0) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 12.270377ms) Apr 28 23:52:42.004: INFO: (0) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 12.483889ms) Apr 28 23:52:42.004: INFO: (0) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... 
(200; 12.457693ms) Apr 28 23:52:42.005: INFO: (0) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 13.936361ms) Apr 28 23:52:42.006: INFO: (0) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 14.984926ms) Apr 28 23:52:42.007: INFO: (0) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 15.282242ms) Apr 28 23:52:42.007: INFO: (0) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 15.328858ms) Apr 28 23:52:42.007: INFO: (0) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 15.741449ms) Apr 28 23:52:42.008: INFO: (0) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 16.539521ms) Apr 28 23:52:42.011: INFO: (0) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: ... (200; 4.124332ms) Apr 28 23:52:42.017: INFO: (1) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 5.148645ms) Apr 28 23:52:42.017: INFO: (1) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 5.034574ms) Apr 28 23:52:42.017: INFO: (1) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... 
(200; 5.236155ms) Apr 28 23:52:42.018: INFO: (1) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 5.327765ms) Apr 28 23:52:42.018: INFO: (1) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 5.297471ms) Apr 28 23:52:42.018: INFO: (1) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 6.165551ms) Apr 28 23:52:42.019: INFO: (1) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 2.470626ms) Apr 28 23:52:42.024: INFO: (2) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.523604ms) Apr 28 23:52:42.024: INFO: (2) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 4.762091ms) Apr 28 23:52:42.025: INFO: (2) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 5.3236ms) Apr 28 23:52:42.025: INFO: (2) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 5.495313ms) Apr 28 23:52:42.025: INFO: (2) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 6.279727ms) Apr 28 23:52:42.025: INFO: (2) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 6.106187ms) Apr 28 23:52:42.025: INFO: (2) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 6.205568ms) Apr 28 23:52:42.025: INFO: (2) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 6.183485ms) Apr 28 23:52:42.026: INFO: (2) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... 
(200; 6.145682ms) Apr 28 23:52:42.026: INFO: (2) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 6.512068ms) Apr 28 23:52:42.026: INFO: (2) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 6.279793ms) Apr 28 23:52:42.026: INFO: (2) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 6.231206ms) Apr 28 23:52:42.026: INFO: (2) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 3.562852ms) Apr 28 23:52:42.030: INFO: (3) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 3.45138ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 4.318585ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 3.923349ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 4.519661ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.469977ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 4.258348ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 4.960119ms) Apr 28 23:52:42.031: INFO: (3) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: ... 
(200; 4.64905ms) Apr 28 23:52:42.032: INFO: (3) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 5.938628ms) Apr 28 23:52:42.033: INFO: (3) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 5.967054ms) Apr 28 23:52:42.033: INFO: (3) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 6.228785ms) Apr 28 23:52:42.035: INFO: (4) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 1.994806ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 3.12266ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 3.59835ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 3.547378ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 3.721179ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 3.804894ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 3.616186ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: ... 
(200; 3.64628ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 3.820215ms) Apr 28 23:52:42.036: INFO: (4) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 3.769921ms) Apr 28 23:52:42.037: INFO: (4) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 4.247397ms) Apr 28 23:52:42.037: INFO: (4) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 4.575341ms) Apr 28 23:52:42.038: INFO: (4) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.800971ms) Apr 28 23:52:42.038: INFO: (4) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 4.912458ms) Apr 28 23:52:42.038: INFO: (4) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 5.110123ms) Apr 28 23:52:42.040: INFO: (5) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 2.301094ms) Apr 28 23:52:42.041: INFO: (5) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 4.05986ms) Apr 28 23:52:42.042: INFO: (5) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 4.545047ms) Apr 28 23:52:42.042: INFO: (5) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 4.46993ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.600751ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.885596ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... 
(200; 4.881108ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 4.95964ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.939932ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 4.982853ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 5.228759ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 5.251482ms) Apr 28 23:52:42.043: INFO: (5) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 5.465683ms) Apr 28 23:52:42.051: INFO: (6) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 7.425238ms) Apr 28 23:52:42.051: INFO: (6) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 7.637631ms) Apr 28 23:52:42.054: INFO: (6) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 10.378318ms) Apr 28 23:52:42.055: INFO: (6) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 10.9464ms) Apr 28 23:52:42.055: INFO: (6) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... 
(200; 10.988033ms) Apr 28 23:52:42.055: INFO: (6) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 11.008315ms) Apr 28 23:52:42.055: INFO: (6) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 10.985725ms) Apr 28 23:52:42.055: INFO: (6) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 11.124848ms) Apr 28 23:52:42.055: INFO: (6) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 11.81667ms) Apr 28 23:52:42.060: INFO: (7) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 4.201705ms) Apr 28 23:52:42.060: INFO: (7) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 4.561541ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 5.318282ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 5.32637ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 5.354334ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 5.423134ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 5.559983ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 5.551239ms) Apr 28 23:52:42.061: INFO: (7) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 6.081108ms) Apr 28 23:52:42.062: INFO: (7) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 6.393334ms) Apr 28 23:52:42.062: INFO: (7) 
/api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 6.605175ms) Apr 28 23:52:42.062: INFO: (7) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 6.798199ms) Apr 28 23:52:42.062: INFO: (7) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 6.970781ms) Apr 28 23:52:42.062: INFO: (7) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 6.965238ms) Apr 28 23:52:42.062: INFO: (7) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 6.963331ms) Apr 28 23:52:42.062: INFO: (7) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: ... (200; 2.661911ms) Apr 28 23:52:42.067: INFO: (8) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.692777ms) Apr 28 23:52:42.067: INFO: (8) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 4.607159ms) Apr 28 23:52:42.067: INFO: (8) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.695307ms) Apr 28 23:52:42.067: INFO: (8) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 4.866338ms) Apr 28 23:52:42.067: INFO: (8) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.936786ms) Apr 28 23:52:42.067: INFO: (8) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 4.984502ms) Apr 28 23:52:42.068: INFO: (8) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.999969ms) Apr 28 23:52:42.068: INFO: (8) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: ... (200; 4.555167ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... 
(200; 4.602996ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.706098ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.707418ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 4.701298ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 4.84465ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 4.773041ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.840462ms) Apr 28 23:52:42.073: INFO: (9) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 3.823534ms) Apr 28 23:52:42.078: INFO: (10) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 3.80685ms) Apr 28 23:52:42.078: INFO: (10) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.183997ms) Apr 28 23:52:42.078: INFO: (10) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 4.104007ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... 
(200; 4.504982ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 4.712052ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 4.744581ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.770144ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 5.064078ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 4.996708ms) Apr 28 23:52:42.079: INFO: (10) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 5.140182ms) Apr 28 23:52:42.081: INFO: (11) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 1.97408ms) Apr 28 23:52:42.083: INFO: (11) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.247315ms) Apr 28 23:52:42.084: INFO: (11) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 4.58227ms) Apr 28 23:52:42.084: INFO: (11) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.722022ms) Apr 28 23:52:42.084: INFO: (11) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.787762ms) Apr 28 23:52:42.084: INFO: (11) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.975164ms) Apr 28 23:52:42.084: INFO: (11) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.892368ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 5.27227ms) Apr 28 23:52:42.085: INFO: (11) 
/api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 5.405395ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test<... (200; 5.415818ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 5.714658ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 5.720208ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 5.751955ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 5.748446ms) Apr 28 23:52:42.085: INFO: (11) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 5.881402ms) Apr 28 23:52:42.088: INFO: (12) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 2.800004ms) Apr 28 23:52:42.088: INFO: (12) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 3.164406ms) Apr 28 23:52:42.089: INFO: (12) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 3.086688ms) Apr 28 23:52:42.089: INFO: (12) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 3.536474ms) Apr 28 23:52:42.089: INFO: (12) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 3.630359ms) Apr 28 23:52:42.089: INFO: (12) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test<... 
(200; 2.109174ms) Apr 28 23:52:42.092: INFO: (13) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 2.223874ms) Apr 28 23:52:42.093: INFO: (13) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 2.331846ms) Apr 28 23:52:42.093: INFO: (13) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 2.490635ms) Apr 28 23:52:42.094: INFO: (13) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 4.207207ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 4.758286ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 4.889576ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 4.738503ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 4.778629ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 4.824571ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.806423ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 4.953698ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.961795ms) Apr 28 23:52:42.095: INFO: (13) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 4.961272ms) Apr 28 23:52:42.097: INFO: (14) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... 
(200; 2.046192ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 4.526297ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 4.508491ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 4.478772ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.462206ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 4.479258ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.770064ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 5.159988ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 5.147994ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 5.222532ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 5.194193ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 5.262395ms) Apr 28 23:52:42.100: INFO: (14) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 5.306272ms) Apr 28 23:52:42.106: INFO: (15) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.802142ms) Apr 28 23:52:42.108: INFO: (15) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 7.29689ms) Apr 28 23:52:42.108: INFO: (15) 
/api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 7.274113ms) Apr 28 23:52:42.108: INFO: (15) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 7.60983ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 10.281228ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 10.280193ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 10.334394ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 10.417396ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 10.488388ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 10.583391ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... 
(200; 10.739459ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 10.771409ms) Apr 28 23:52:42.111: INFO: (15) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 10.764314ms) Apr 28 23:52:42.112: INFO: (15) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 11.175907ms) Apr 28 23:52:42.112: INFO: (15) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 11.574311ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.77154ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.74991ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.782212ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 4.745353ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... 
(200; 4.865063ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 4.815957ms) Apr 28 23:52:42.117: INFO: (16) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 4.958437ms) Apr 28 23:52:42.118: INFO: (16) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 6.014579ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 6.120424ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 6.089916ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 6.151024ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 6.179821ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 6.253362ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 6.258363ms) Apr 28 23:52:42.119: INFO: (16) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test<... (200; 2.742358ms) Apr 28 23:52:42.123: INFO: (17) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... 
(200; 3.343403ms) Apr 28 23:52:42.123: INFO: (17) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 3.699516ms) Apr 28 23:52:42.123: INFO: (17) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:162/proxy/: bar (200; 3.28857ms) Apr 28 23:52:42.123: INFO: (17) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h/proxy/: test (200; 3.478902ms) Apr 28 23:52:42.123: INFO: (17) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 3.770979ms) Apr 28 23:52:42.123: INFO: (17) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test<... (200; 3.450391ms) Apr 28 23:52:42.127: INFO: (18) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 3.720598ms) Apr 28 23:52:42.128: INFO: (18) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 3.783666ms) Apr 28 23:52:42.128: INFO: (18) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname2/proxy/: bar (200; 4.024013ms) Apr 28 23:52:42.128: INFO: (18) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:460/proxy/: tls baz (200; 4.017463ms) Apr 28 23:52:42.128: INFO: (18) /api/v1/namespaces/proxy-5483/services/proxy-service-lftgw:portname1/proxy/: foo (200; 4.55791ms) Apr 28 23:52:42.128: INFO: (18) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 4.448361ms) Apr 28 23:52:42.128: INFO: (18) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname1/proxy/: tls baz (200; 4.611742ms) Apr 28 23:52:42.129: INFO: (18) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 4.591828ms) Apr 28 23:52:42.129: INFO: (18) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 4.89978ms) Apr 28 23:52:42.129: INFO: (18) 
/api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 5.212379ms) Apr 28 23:52:42.129: INFO: (18) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname1/proxy/: foo (200; 5.239638ms) Apr 28 23:52:42.133: INFO: (19) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:462/proxy/: tls qux (200; 3.76693ms) Apr 28 23:52:42.134: INFO: (19) /api/v1/namespaces/proxy-5483/services/http:proxy-service-lftgw:portname2/proxy/: bar (200; 5.177316ms) Apr 28 23:52:42.134: INFO: (19) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:1080/proxy/: test<... (200; 5.180512ms) Apr 28 23:52:42.135: INFO: (19) /api/v1/namespaces/proxy-5483/pods/http:proxy-service-lftgw-pgw6h:1080/proxy/: ... (200; 6.266324ms) Apr 28 23:52:42.135: INFO: (19) /api/v1/namespaces/proxy-5483/services/https:proxy-service-lftgw:tlsportname2/proxy/: tls qux (200; 6.267054ms) Apr 28 23:52:42.136: INFO: (19) /api/v1/namespaces/proxy-5483/pods/https:proxy-service-lftgw-pgw6h:443/proxy/: test (200; 6.38855ms) Apr 28 23:52:42.136: INFO: (19) /api/v1/namespaces/proxy-5483/pods/proxy-service-lftgw-pgw6h:160/proxy/: foo (200; 6.392161ms) STEP: deleting ReplicationController proxy-service-lftgw in namespace proxy-5483, will wait for the garbage collector to delete the pods Apr 28 23:52:42.192: INFO: Deleting ReplicationController proxy-service-lftgw took: 4.593117ms Apr 28 23:52:42.492: INFO: Terminating ReplicationController proxy-service-lftgw pods took: 300.234093ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:52:45.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5483" for this suite. 
• [SLOW TEST:9.381 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":55,"skipped":1066,"failed":0}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:52:45.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-159
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 28 23:52:45.252: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 28 23:52:45.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 28 23:52:47.340: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 28 23:52:49.342: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 28 23:52:51.340: INFO: The status
of Pod netserver-0 is Running (Ready = false) Apr 28 23:52:53.339: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 23:52:55.348: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 23:52:57.369: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 23:52:59.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 28 23:53:01.339: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 28 23:53:01.345: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 28 23:53:05.362: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostname&protocol=http&host=10.244.2.38&port=8080&tries=1'] Namespace:pod-network-test-159 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:53:05.362: INFO: >>> kubeConfig: /root/.kube/config I0428 23:53:05.393461 7 log.go:172] (0xc0053382c0) (0xc001a8c6e0) Create stream I0428 23:53:05.393496 7 log.go:172] (0xc0053382c0) (0xc001a8c6e0) Stream added, broadcasting: 1 I0428 23:53:05.395465 7 log.go:172] (0xc0053382c0) Reply frame received for 1 I0428 23:53:05.395523 7 log.go:172] (0xc0053382c0) (0xc000315cc0) Create stream I0428 23:53:05.395547 7 log.go:172] (0xc0053382c0) (0xc000315cc0) Stream added, broadcasting: 3 I0428 23:53:05.396745 7 log.go:172] (0xc0053382c0) Reply frame received for 3 I0428 23:53:05.396817 7 log.go:172] (0xc0053382c0) (0xc000315f40) Create stream I0428 23:53:05.396852 7 log.go:172] (0xc0053382c0) (0xc000315f40) Stream added, broadcasting: 5 I0428 23:53:05.398209 7 log.go:172] (0xc0053382c0) Reply frame received for 5 I0428 23:53:05.464801 7 log.go:172] (0xc0053382c0) Data frame received for 5 I0428 23:53:05.464846 7 log.go:172] (0xc000315f40) (5) Data frame handling I0428 23:53:05.464880 7 log.go:172] (0xc0053382c0) Data frame received for 3 I0428 23:53:05.464899 7 log.go:172] 
(0xc000315cc0) (3) Data frame handling I0428 23:53:05.464916 7 log.go:172] (0xc000315cc0) (3) Data frame sent I0428 23:53:05.464927 7 log.go:172] (0xc0053382c0) Data frame received for 3 I0428 23:53:05.464937 7 log.go:172] (0xc000315cc0) (3) Data frame handling I0428 23:53:05.466664 7 log.go:172] (0xc0053382c0) Data frame received for 1 I0428 23:53:05.466701 7 log.go:172] (0xc001a8c6e0) (1) Data frame handling I0428 23:53:05.466755 7 log.go:172] (0xc001a8c6e0) (1) Data frame sent I0428 23:53:05.466788 7 log.go:172] (0xc0053382c0) (0xc001a8c6e0) Stream removed, broadcasting: 1 I0428 23:53:05.466830 7 log.go:172] (0xc0053382c0) Go away received I0428 23:53:05.466910 7 log.go:172] (0xc0053382c0) (0xc001a8c6e0) Stream removed, broadcasting: 1 I0428 23:53:05.466935 7 log.go:172] (0xc0053382c0) (0xc000315cc0) Stream removed, broadcasting: 3 I0428 23:53:05.466952 7 log.go:172] (0xc0053382c0) (0xc000315f40) Stream removed, broadcasting: 5 Apr 28 23:53:05.467: INFO: Waiting for responses: map[] Apr 28 23:53:05.470: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.48:8080/dial?request=hostname&protocol=http&host=10.244.1.47&port=8080&tries=1'] Namespace:pod-network-test-159 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 23:53:05.470: INFO: >>> kubeConfig: /root/.kube/config I0428 23:53:05.510140 7 log.go:172] (0xc005468420) (0xc001552500) Create stream I0428 23:53:05.510178 7 log.go:172] (0xc005468420) (0xc001552500) Stream added, broadcasting: 1 I0428 23:53:05.512785 7 log.go:172] (0xc005468420) Reply frame received for 1 I0428 23:53:05.512841 7 log.go:172] (0xc005468420) (0xc000b7b680) Create stream I0428 23:53:05.512870 7 log.go:172] (0xc005468420) (0xc000b7b680) Stream added, broadcasting: 3 I0428 23:53:05.515022 7 log.go:172] (0xc005468420) Reply frame received for 3 I0428 23:53:05.515144 7 log.go:172] (0xc005468420) (0xc001552640) Create stream I0428 
23:53:05.515189 7 log.go:172] (0xc005468420) (0xc001552640) Stream added, broadcasting: 5 I0428 23:53:05.520717 7 log.go:172] (0xc005468420) Reply frame received for 5 I0428 23:53:05.579432 7 log.go:172] (0xc005468420) Data frame received for 3 I0428 23:53:05.579453 7 log.go:172] (0xc000b7b680) (3) Data frame handling I0428 23:53:05.579465 7 log.go:172] (0xc000b7b680) (3) Data frame sent I0428 23:53:05.579935 7 log.go:172] (0xc005468420) Data frame received for 3 I0428 23:53:05.579961 7 log.go:172] (0xc000b7b680) (3) Data frame handling I0428 23:53:05.581032 7 log.go:172] (0xc005468420) Data frame received for 5 I0428 23:53:05.581071 7 log.go:172] (0xc001552640) (5) Data frame handling I0428 23:53:05.582001 7 log.go:172] (0xc005468420) Data frame received for 1 I0428 23:53:05.582040 7 log.go:172] (0xc001552500) (1) Data frame handling I0428 23:53:05.582070 7 log.go:172] (0xc001552500) (1) Data frame sent I0428 23:53:05.582096 7 log.go:172] (0xc005468420) (0xc001552500) Stream removed, broadcasting: 1 I0428 23:53:05.582122 7 log.go:172] (0xc005468420) Go away received I0428 23:53:05.582183 7 log.go:172] (0xc005468420) (0xc001552500) Stream removed, broadcasting: 1 I0428 23:53:05.582208 7 log.go:172] (0xc005468420) (0xc000b7b680) Stream removed, broadcasting: 3 I0428 23:53:05.582220 7 log.go:172] (0xc005468420) (0xc001552640) Stream removed, broadcasting: 5 Apr 28 23:53:05.582: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:05.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-159" for this suite. 
• [SLOW TEST:20.389 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1068,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:53:05.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 28 23:53:06.423: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 28 23:53:08.438: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714786, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714786, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714786, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714786, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 28 23:53:11.498: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 23:53:11.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:53:12.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1053" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.339 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":57,"skipped":1089,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:53:12.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 28 23:53:12.979: INFO: Waiting up to 5m0s for pod "pod-066c408f-d898-49d7-ada2-d7e66b1cceab" in namespace "emptydir-7372" to be "Succeeded or Failed"
Apr 28 23:53:12.983: INFO: Pod "pod-066c408f-d898-49d7-ada2-d7e66b1cceab": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.951729ms
Apr 28 23:53:14.986: INFO: Pod "pod-066c408f-d898-49d7-ada2-d7e66b1cceab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00707455s
Apr 28 23:53:16.990: INFO: Pod "pod-066c408f-d898-49d7-ada2-d7e66b1cceab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010950781s
STEP: Saw pod success
Apr 28 23:53:16.990: INFO: Pod "pod-066c408f-d898-49d7-ada2-d7e66b1cceab" satisfied condition "Succeeded or Failed"
Apr 28 23:53:16.993: INFO: Trying to get logs from node latest-worker pod pod-066c408f-d898-49d7-ada2-d7e66b1cceab container test-container:
STEP: delete the pod
Apr 28 23:53:17.023: INFO: Waiting for pod pod-066c408f-d898-49d7-ada2-d7e66b1cceab to disappear
Apr 28 23:53:17.055: INFO: Pod pod-066c408f-d898-49d7-ada2-d7e66b1cceab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 28 23:53:17.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7372" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":1105,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 28 23:53:17.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 28 23:53:17.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 28 23:53:20.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 create -f -'
Apr 28 23:53:23.413: INFO: stderr: ""
Apr 28 23:53:23.413: INFO: stdout: "e2e-test-crd-publish-openapi-7512-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 28 23:53:23.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 delete e2e-test-crd-publish-openapi-7512-crds test-foo'
Apr 28 23:53:23.522: INFO: stderr: ""
Apr 28 23:53:23.523: INFO: stdout: "e2e-test-crd-publish-openapi-7512-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 28 23:53:23.523: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 apply -f -' Apr 28 23:53:23.797: INFO: stderr: "" Apr 28 23:53:23.797: INFO: stdout: "e2e-test-crd-publish-openapi-7512-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 28 23:53:23.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 delete e2e-test-crd-publish-openapi-7512-crds test-foo' Apr 28 23:53:23.893: INFO: stderr: "" Apr 28 23:53:23.893: INFO: stdout: "e2e-test-crd-publish-openapi-7512-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 28 23:53:23.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 create -f -' Apr 28 23:53:24.127: INFO: rc: 1 Apr 28 23:53:24.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 apply -f -' Apr 28 23:53:24.373: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 28 23:53:24.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 create -f -' Apr 28 23:53:24.595: INFO: rc: 1 Apr 28 23:53:24.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1054 apply -f -' Apr 28 23:53:24.820: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 28 23:53:24.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7512-crds' Apr 28 23:53:25.048: 
INFO: stderr: "" Apr 28 23:53:25.048: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 28 23:53:25.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7512-crds.metadata' Apr 28 23:53:25.293: INFO: stderr: "" Apr 28 23:53:25.293: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. 
They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. 
After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 28 23:53:25.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7512-crds.spec' Apr 28 23:53:25.523: INFO: stderr: "" Apr 28 23:53:25.523: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 28 23:53:25.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7512-crds.spec.bars' Apr 28 23:53:25.773: INFO: stderr: "" Apr 28 23:53:25.773: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7512-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 28 23:53:25.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7512-crds.spec.bars2' Apr 28 23:53:26.020: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1054" for this suite. 
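The `kubectl explain` output above is generated from the OpenAPI v3 schema published with the test's CRD. A minimal sketch of a CRD that would produce that output (group, kind, and field descriptions reconstructed from the log; this is not the test's exact fixture):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-7512-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-7512-crds
    singular: e2e-test-crd-publish-openapi-7512-crd
    kind: E2e-test-crd-publish-openapi-7512-crd
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: Foo CRD for Testing
          type: object
          properties:
            spec:
              description: Specification of Foo
              type: object
              properties:
                bars:
                  description: List of Bars and their specs.
                  type: array
                  items:
                    type: object
                    required: [name]      # explains the "-required-" marker on name
                    properties:
                      name:
                        description: Name of Bar.
                        type: string
                      age:
                        description: Age of Bar.
                        type: string
                      bazs:
                        description: List of Bazs.
                        type: array
                        items:
                          type: string
            status:
              description: Status of Foo
              type: object
```

Because the schema has no `x-kubernetes-preserve-unknown-fields`, client-side validation rejects unknown properties (the `rc: 1` results above), and `kubectl explain ...spec.bars2` fails since `bars2` is not in the schema.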
• [SLOW TEST:10.861 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":59,"skipped":1111,"failed":0} [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:53:27.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:53:27.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 28 23:53:28.144: INFO: stderr: "" Apr 28 23:53:28.144: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", 
GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6391" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":60,"skipped":1111,"failed":0} ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:53:28.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 28 23:53:28.305: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-54 /api/v1/namespaces/watch-54/configmaps/e2e-watch-test-resource-version 3ec1517f-1060-45d3-98e0-8aff61c85715 11841146 0 2020-04-28 23:53:28 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 28 23:53:28.306: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-54 /api/v1/namespaces/watch-54/configmaps/e2e-watch-test-resource-version 3ec1517f-1060-45d3-98e0-8aff61c85715 11841147 0 2020-04-28 23:53:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:28.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-54" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":61,"skipped":1111,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:53:28.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 23:53:28.391: INFO: Waiting up to 5m0s for pod "pod-67a113b4-3ad2-474e-b226-bb9d38b44983" in namespace "emptydir-3648" to 
be "Succeeded or Failed" Apr 28 23:53:28.397: INFO: Pod "pod-67a113b4-3ad2-474e-b226-bb9d38b44983": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516057ms Apr 28 23:53:30.445: INFO: Pod "pod-67a113b4-3ad2-474e-b226-bb9d38b44983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054804377s Apr 28 23:53:32.449: INFO: Pod "pod-67a113b4-3ad2-474e-b226-bb9d38b44983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058932034s STEP: Saw pod success Apr 28 23:53:32.450: INFO: Pod "pod-67a113b4-3ad2-474e-b226-bb9d38b44983" satisfied condition "Succeeded or Failed" Apr 28 23:53:32.453: INFO: Trying to get logs from node latest-worker pod pod-67a113b4-3ad2-474e-b226-bb9d38b44983 container test-container: STEP: delete the pod Apr 28 23:53:32.476: INFO: Waiting for pod pod-67a113b4-3ad2-474e-b226-bb9d38b44983 to disappear Apr 28 23:53:32.494: INFO: Pod pod-67a113b4-3ad2-474e-b226-bb9d38b44983 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:32.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3648" for this suite. 
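The emptyDir (non-root,0644,default) test above waits for a short-lived pod to reach "Succeeded". A sketch of the pod shape involved (image, command, and UID are assumptions for illustration, not taken from the log): the file is written into an emptyDir on the node's default medium as a non-root user with mode 0644, and the container exits after verifying it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo          # hypothetical name; the test generates its own
spec:
  restartPolicy: Never              # pod should terminate, matching "Succeeded or Failed"
  containers:
    - name: test-container
      image: busybox                # assumed; the e2e suite uses its own test images
      command: ["sh", "-c", "echo hello > /test-volume/f && stat -c '%a' /test-volume/f"]
      securityContext:
        runAsUser: 1001             # the "non-root" part of the test variant
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}                  # default medium (node disk); medium: Memory is the tmpfs variant
```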
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1112,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:53:32.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 28 23:53:37.130: INFO: Successfully updated pod "labelsupdate62912524-bc1e-4503-9ba9-c9ec649de604" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:39.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2111" for this suite. 
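The "update labels on modification" test above relies on the downward API volume: the kubelet rewrites the projected file when pod labels change, which is what "Successfully updated pod" is exercising. A sketch of such a pod (name, image, and label key are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                 # hypothetical; the test uses a generated labelsupdate-* name
  labels:
    key1: value1                    # updating this label refreshes the projected file
spec:
  containers:
    - name: client-container
      image: busybox                # assumed test image
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```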
• [SLOW TEST:6.655 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1126,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:53:39.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:53:39.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2801" for this suite. 
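The secret-patch test above creates a labeled Secret so it can later be deleted via a LabelSelector and found again by that label. A minimal sketch of the kind of object involved (name, label key, and data are illustrative assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret                 # hypothetical name
  labels:
    testsecret: "true"              # assumed label; used for the selector-based delete/list steps
type: Opaque
data:
  key: dmFsdWU=                     # base64 for "value"
```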
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":64,"skipped":1141,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:53:39.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8194.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8194.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8194.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8194.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 23:53:43.555: INFO: DNS probes using dns-test-4af924ec-15ab-4fce-806c-73138aacbe02 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8194.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8194.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8194.svc.cluster.local 
CNAME > /results/jessie_udp@dns-test-service-3.dns-8194.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 23:53:51.687: INFO: File wheezy_udp@dns-test-service-3.dns-8194.svc.cluster.local from pod dns-8194/dns-test-e9ecf636-4977-469b-83ee-a871c576de93 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 28 23:53:51.691: INFO: Lookups using dns-8194/dns-test-e9ecf636-4977-469b-83ee-a871c576de93 failed for: [wheezy_udp@dns-test-service-3.dns-8194.svc.cluster.local] Apr 28 23:53:56.700: INFO: DNS probes using dns-test-e9ecf636-4977-469b-83ee-a871c576de93 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8194.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8194.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8194.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8194.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 23:54:03.267: INFO: DNS probes using dns-test-09973701-4272-4fdc-b76f-b27733244948 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:03.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8194" for this suite. 
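The DNS test above creates an ExternalName service, which cluster DNS serves as a CNAME record (hence the `dig ... CNAME` probes). The transient failure in the log, where a probe still sees `foo.example.com.` after the test switched to `bar.example.com.`, is expected while DNS caches catch up. A sketch of the service as initially created (shape inferred from the log output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-8194
spec:
  type: ExternalName
  externalName: foo.example.com     # the test later changes this to bar.example.com,
                                    # then converts the service to type: ClusterIP
```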
• [SLOW TEST:24.177 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":65,"skipped":1150,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:03.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:09.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5216" for this suite. STEP: Destroying namespace "nsdeletetest-2478" for this suite. 
Apr 28 23:54:10.002: INFO: Namespace nsdeletetest-2478 was already deleted STEP: Destroying namespace "nsdeletetest-8143" for this suite. • [SLOW TEST:6.518 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":66,"skipped":1156,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:10.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-db6efc5c-b538-4235-ba79-cce987081b9f STEP: Creating a pod to test consume secrets Apr 28 23:54:10.071: INFO: Waiting up to 5m0s for pod "pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422" in namespace "secrets-8914" to be "Succeeded or Failed" Apr 28 23:54:10.075: INFO: Pod "pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.889863ms Apr 28 23:54:12.081: INFO: Pod "pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010613042s Apr 28 23:54:14.086: INFO: Pod "pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014931043s STEP: Saw pod success Apr 28 23:54:14.086: INFO: Pod "pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422" satisfied condition "Succeeded or Failed" Apr 28 23:54:14.089: INFO: Trying to get logs from node latest-worker pod pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422 container secret-volume-test: STEP: delete the pod Apr 28 23:54:14.112: INFO: Waiting for pod pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422 to disappear Apr 28 23:54:14.116: INFO: Pod pod-secrets-8c8d093a-69b1-4949-b795-51f010ea6422 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:14.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8914" for this suite. 
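The defaultMode test above mounts the secret as a volume and checks the file permissions inside the container. A sketch of the pod shape (image, command, and the exact mode value are assumptions; the secret name is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo     # hypothetical; the test uses a generated pod-secrets-* name
spec:
  restartPolicy: Never
  containers:
    - name: secret-volume-test
      image: busybox                # assumed test image
      command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/*"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-db6efc5c-b538-4235-ba79-cce987081b9f
        defaultMode: 0400           # the mode under test is set here (0400 is an assumed example)
```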
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1160,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:14.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-674d2b75-887c-489d-8d31-2f817139ab7b STEP: Creating a pod to test consume configMaps Apr 28 23:54:14.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a" in namespace "projected-1453" to be "Succeeded or Failed" Apr 28 23:54:14.216: INFO: Pod "pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.81815ms Apr 28 23:54:16.220: INFO: Pod "pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021605579s Apr 28 23:54:18.225: INFO: Pod "pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025853899s STEP: Saw pod success Apr 28 23:54:18.225: INFO: Pod "pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a" satisfied condition "Succeeded or Failed" Apr 28 23:54:18.228: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a container projected-configmap-volume-test: STEP: delete the pod Apr 28 23:54:18.245: INFO: Waiting for pod pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a to disappear Apr 28 23:54:18.256: INFO: Pod pod-projected-configmaps-5e7f4545-a665-4f7d-93d7-51d660e6ed8a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:18.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1453" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1174,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:18.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 23:54:18.375: INFO: Waiting up to 5m0s for pod 
"pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d" in namespace "emptydir-6803" to be "Succeeded or Failed" Apr 28 23:54:18.388: INFO: Pod "pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.154455ms Apr 28 23:54:20.403: INFO: Pod "pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027336058s Apr 28 23:54:22.407: INFO: Pod "pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031481142s STEP: Saw pod success Apr 28 23:54:22.407: INFO: Pod "pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d" satisfied condition "Succeeded or Failed" Apr 28 23:54:22.410: INFO: Trying to get logs from node latest-worker pod pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d container test-container: STEP: delete the pod Apr 28 23:54:22.446: INFO: Waiting for pod pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d to disappear Apr 28 23:54:22.460: INFO: Pod pod-b1ee5328-a3ad-47cc-98c1-0ae1ab2a9b3d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:22.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6803" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1181,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:22.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 23:54:22.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03" in namespace "downward-api-4206" to be "Succeeded or Failed" Apr 28 23:54:22.592: INFO: Pod "downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03": Phase="Pending", Reason="", readiness=false. Elapsed: 3.685342ms Apr 28 23:54:24.595: INFO: Pod "downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0071626s Apr 28 23:54:26.600: INFO: Pod "downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012167963s STEP: Saw pod success Apr 28 23:54:26.600: INFO: Pod "downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03" satisfied condition "Succeeded or Failed" Apr 28 23:54:26.603: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03 container client-container: STEP: delete the pod Apr 28 23:54:26.650: INFO: Waiting for pod downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03 to disappear Apr 28 23:54:26.662: INFO: Pod downwardapi-volume-2d8edc3a-8e23-489f-a991-6aeb7caf8b03 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:26.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4206" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1189,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:26.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-3268/configmap-test-3af4b6fc-9512-4137-9090-4257b6e650ad STEP: Creating a pod to test 
consume configMaps Apr 28 23:54:26.780: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49" in namespace "configmap-3268" to be "Succeeded or Failed" Apr 28 23:54:26.783: INFO: Pod "pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477829ms Apr 28 23:54:28.810: INFO: Pod "pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030294457s Apr 28 23:54:30.814: INFO: Pod "pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033953938s STEP: Saw pod success Apr 28 23:54:30.814: INFO: Pod "pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49" satisfied condition "Succeeded or Failed" Apr 28 23:54:30.817: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49 container env-test: STEP: delete the pod Apr 28 23:54:30.848: INFO: Waiting for pod pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49 to disappear Apr 28 23:54:30.860: INFO: Pod pod-configmaps-1d4c6b60-4615-48a9-87e9-6d2982758d49 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:30.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3268" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1208,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:30.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-0660f1e2-5620-467a-9067-2e82adf23dff [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:30.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9247" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":72,"skipped":1221,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:31.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-1199 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1199 STEP: Deleting pre-stop pod Apr 28 23:54:44.147: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:54:44.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1199" for this suite. • [SLOW TEST:13.220 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":73,"skipped":1230,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:54:44.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 28 23:54:44.279: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:55:00.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3865" for this suite. • [SLOW TEST:15.838 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":74,"skipped":1230,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:55:00.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:55:00.099: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 28 23:55:03.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2375 create -f -' Apr 28 23:55:06.046: INFO: stderr: "" Apr 28 23:55:06.046: INFO: stdout: "e2e-test-crd-publish-openapi-432-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 28 23:55:06.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2375 delete e2e-test-crd-publish-openapi-432-crds test-cr' Apr 28 23:55:06.146: INFO: stderr: "" Apr 28 23:55:06.146: INFO: stdout: "e2e-test-crd-publish-openapi-432-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 28 23:55:06.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2375 apply -f -' Apr 28 23:55:06.422: INFO: stderr: "" Apr 28 23:55:06.422: INFO: stdout: "e2e-test-crd-publish-openapi-432-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 28 23:55:06.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2375 delete e2e-test-crd-publish-openapi-432-crds test-cr' Apr 28 23:55:06.564: INFO: stderr: "" Apr 28 23:55:06.564: INFO: stdout: "e2e-test-crd-publish-openapi-432-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 28 23:55:06.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-432-crds' Apr 28 23:55:06.804: INFO: stderr: "" Apr 28 23:55:06.804: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-432-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:55:09.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2375" for this suite. • [SLOW TEST:9.678 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":75,"skipped":1235,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:55:09.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 23:55:17.881: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 23:55:17.905: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 23:55:19.905: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 23:55:19.909: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 23:55:21.905: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 23:55:21.909: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:55:21.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1179" for this suite. • [SLOW TEST:12.184 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1242,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:55:21.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:55:38.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5725" for this suite. • [SLOW TEST:16.191 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":77,"skipped":1244,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:55:38.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:55:43.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6547" for this suite. 
• [SLOW TEST:5.148 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":78,"skipped":1244,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:55:43.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:14.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8231" for this suite. 
STEP: Destroying namespace "nsdeletetest-9034" for this suite. Apr 28 23:56:14.654: INFO: Namespace nsdeletetest-9034 was already deleted STEP: Destroying namespace "nsdeletetest-5337" for this suite. • [SLOW TEST:31.394 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":79,"skipped":1245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:14.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 23:56:15.770: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 23:56:17.781: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714975, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714975, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714975, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714975, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 23:56:20.792: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:56:20.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7225-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:22.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8975" for this suite. 
STEP: Destroying namespace "webhook-8975-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.518 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":80,"skipped":1271,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:22.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 28 23:56:22.271: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094" in namespace "downward-api-7447" to be "Succeeded 
or Failed" Apr 28 23:56:22.275: INFO: Pod "downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694856ms Apr 28 23:56:24.279: INFO: Pod "downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007964269s Apr 28 23:56:26.283: INFO: Pod "downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011643882s STEP: Saw pod success Apr 28 23:56:26.283: INFO: Pod "downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094" satisfied condition "Succeeded or Failed" Apr 28 23:56:26.285: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094 container client-container: STEP: delete the pod Apr 28 23:56:26.301: INFO: Waiting for pod downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094 to disappear Apr 28 23:56:26.319: INFO: Pod downwardapi-volume-4e36dab2-59e1-47fa-b7cf-3f8b684b4094 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:26.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7447" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:26.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-9f01e6c8-d886-4c53-a25e-4f17b2df24a2 STEP: Creating a pod to test consume secrets Apr 28 23:56:26.409: INFO: Waiting up to 5m0s for pod "pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9" in namespace "secrets-1298" to be "Succeeded or Failed" Apr 28 23:56:26.419: INFO: Pod "pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.939265ms Apr 28 23:56:28.560: INFO: Pod "pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151163903s Apr 28 23:56:30.565: INFO: Pod "pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.155295052s STEP: Saw pod success Apr 28 23:56:30.565: INFO: Pod "pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9" satisfied condition "Succeeded or Failed" Apr 28 23:56:30.568: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9 container secret-volume-test: STEP: delete the pod Apr 28 23:56:30.642: INFO: Waiting for pod pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9 to disappear Apr 28 23:56:30.659: INFO: Pod pod-secrets-d99499ed-3dd2-419f-ae4b-0c6441b8e1a9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:30.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1298" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1304,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:30.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-affd29cc-1298-49d8-b704-9bef81ddccd6 STEP: Creating secret with name 
secret-projected-all-test-volume-19071cd9-a0c8-418e-90c3-89b989ee4aaf STEP: Creating a pod to test Check all projections for projected volume plugin Apr 28 23:56:30.792: INFO: Waiting up to 5m0s for pod "projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e" in namespace "projected-8876" to be "Succeeded or Failed" Apr 28 23:56:30.810: INFO: Pod "projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.712101ms Apr 28 23:56:32.830: INFO: Pod "projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037350767s Apr 28 23:56:34.832: INFO: Pod "projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039838453s STEP: Saw pod success Apr 28 23:56:34.832: INFO: Pod "projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e" satisfied condition "Succeeded or Failed" Apr 28 23:56:34.834: INFO: Trying to get logs from node latest-worker pod projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e container projected-all-volume-test: STEP: delete the pod Apr 28 23:56:34.861: INFO: Waiting for pod projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e to disappear Apr 28 23:56:34.877: INFO: Pod projected-volume-cffda0f4-451f-4e97-a9d2-b34402324d5e no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:34.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8876" for this suite. 
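The "Projected combined" test just completed verifies that a single `projected` volume can merge a ConfigMap, a Secret, and Downward API fields under one mount point, which is also the mechanism the preceding Secrets volume test exercises for a lone secret. A hedged sketch of such a volume (resource names are placeholders, not the generated names in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls /all-in-one && cat /all-in-one/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap     # placeholder for the generated configmap-projected-all-test-volume-… name
      - secret:
          name: my-secret        # placeholder for the generated secret-projected-all-test-volume-… name
      - downwardAPI:
          items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name
```

All three sources appear as files in the same directory, which is what the test's "Check all projections" step asserts.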
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1310,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:34.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 23:56:35.429: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 23:56:37.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714995, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714995, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714995, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723714995, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 23:56:40.495: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:40.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1908" for this suite. STEP: Destroying namespace "webhook-1908-markers" for this suite. 
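The admission-webhook test registers a mutating webhook against pod creation, then creates a pod and checks that the webhook's defaults were applied. A rough sketch of the kind of `MutatingWebhookConfiguration` involved, under the assumption that the service/namespace names match the deployment seen in the log (the `path` and CA bundle are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pods-example        # hypothetical name
webhooks:
- name: mutate-pods.example.com    # hypothetical; must be a qualified name
  clientConfig:
    service:
      name: e2e-test-webhook       # the service paired with the endpoint above
      namespace: webhook-1908
      path: /mutating-pods         # assumed handler path on the sample webhook
    # caBundle: <base64-encoded CA cert from the "Setting up server cert" step>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With this in place, the API server calls the webhook on every pod CREATE in scope, and the returned JSON patch mutates the object before admission, which is what "create a pod that should be updated by the webhook" verifies.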
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.885 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":84,"skipped":1312,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:40.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Apr 28 23:56:40.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Apr 28 23:56:40.965: INFO: stderr: "" Apr 28 23:56:40.965: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is 
running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:56:40.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6914" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":85,"skipped":1321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:56:40.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search 
dns-test-service.dns-5610 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5610;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5610 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5610;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5610.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5610.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5610.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5610.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5610.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5610.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.207.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.207.108_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5610 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5610;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5610 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5610;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5610.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5610.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5610.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5610.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5610.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5610.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5610.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5610.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5610.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.207.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.207.108_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 23:56:48.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.107: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.115: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods 
dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.118: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.121: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.124: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.152: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.155: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.157: INFO: Unable to read jessie_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.159: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.162: INFO: Unable to read jessie_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the 
requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.167: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.169: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:48.185: INFO: Lookups using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5610 wheezy_tcp@dns-test-service.dns-5610 wheezy_udp@dns-test-service.dns-5610.svc wheezy_tcp@dns-test-service.dns-5610.svc wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5610 jessie_tcp@dns-test-service.dns-5610 jessie_udp@dns-test-service.dns-5610.svc jessie_tcp@dns-test-service.dns-5610.svc jessie_udp@_http._tcp.dns-test-service.dns-5610.svc jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc] Apr 28 23:56:53.192: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.196: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not 
find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.198: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.204: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.206: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.209: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.211: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.231: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.234: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: 
the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.237: INFO: Unable to read jessie_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.240: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.243: INFO: Unable to read jessie_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.246: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.248: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:53.266: INFO: Lookups using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5610 wheezy_tcp@dns-test-service.dns-5610 wheezy_udp@dns-test-service.dns-5610.svc wheezy_tcp@dns-test-service.dns-5610.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5610 jessie_tcp@dns-test-service.dns-5610 jessie_udp@dns-test-service.dns-5610.svc jessie_tcp@dns-test-service.dns-5610.svc jessie_udp@_http._tcp.dns-test-service.dns-5610.svc jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc] Apr 28 23:56:58.191: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.195: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.198: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.204: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.209: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.234: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.237: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.240: INFO: Unable to read jessie_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.247: INFO: Unable to read jessie_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.250: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.253: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.256: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:56:58.276: INFO: Lookups using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5610 wheezy_tcp@dns-test-service.dns-5610 wheezy_udp@dns-test-service.dns-5610.svc wheezy_tcp@dns-test-service.dns-5610.svc wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5610 jessie_tcp@dns-test-service.dns-5610 jessie_udp@dns-test-service.dns-5610.svc jessie_tcp@dns-test-service.dns-5610.svc jessie_udp@_http._tcp.dns-test-service.dns-5610.svc jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc] Apr 28 23:57:03.190: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.214: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.218: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 
23:57:03.221: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.226: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.228: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.230: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.252: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.256: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.259: INFO: Unable to read jessie_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods 
dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.262: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.264: INFO: Unable to read jessie_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.266: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.269: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.272: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:03.289: INFO: Lookups using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5610 wheezy_tcp@dns-test-service.dns-5610 wheezy_udp@dns-test-service.dns-5610.svc wheezy_tcp@dns-test-service.dns-5610.svc wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5610 jessie_tcp@dns-test-service.dns-5610 jessie_udp@dns-test-service.dns-5610.svc jessie_tcp@dns-test-service.dns-5610.svc 
jessie_udp@_http._tcp.dns-test-service.dns-5610.svc jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc] Apr 28 23:57:08.190: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.194: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.197: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.204: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod 
dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.235: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.239: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.242: INFO: Unable to read jessie_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.245: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.249: INFO: Unable to read jessie_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.252: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.255: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.259: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:08.278: INFO: Lookups using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5610 wheezy_tcp@dns-test-service.dns-5610 wheezy_udp@dns-test-service.dns-5610.svc wheezy_tcp@dns-test-service.dns-5610.svc wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5610 jessie_tcp@dns-test-service.dns-5610 jessie_udp@dns-test-service.dns-5610.svc jessie_tcp@dns-test-service.dns-5610.svc jessie_udp@_http._tcp.dns-test-service.dns-5610.svc jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc] Apr 28 23:57:13.191: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.194: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.197: INFO: Unable to read wheezy_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.203: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.230: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.233: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.235: INFO: Unable to read jessie_udp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.237: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610 from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.240: 
INFO: Unable to read jessie_udp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.243: INFO: Unable to read jessie_tcp@dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.246: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.248: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc from pod dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742: the server could not find the requested resource (get pods dns-test-3310908a-1731-43d8-b8ff-5092bb67a742) Apr 28 23:57:13.271: INFO: Lookups using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5610 wheezy_tcp@dns-test-service.dns-5610 wheezy_udp@dns-test-service.dns-5610.svc wheezy_tcp@dns-test-service.dns-5610.svc wheezy_udp@_http._tcp.dns-test-service.dns-5610.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5610.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5610 jessie_tcp@dns-test-service.dns-5610 jessie_udp@dns-test-service.dns-5610.svc jessie_tcp@dns-test-service.dns-5610.svc jessie_udp@_http._tcp.dns-test-service.dns-5610.svc jessie_tcp@_http._tcp.dns-test-service.dns-5610.svc] Apr 28 23:57:18.278: INFO: DNS probes using dns-5610/dns-test-3310908a-1731-43d8-b8ff-5092bb67a742 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:57:18.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5610" for this suite. • [SLOW TEST:37.822 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":86,"skipped":1348,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:57:18.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 28 23:57:19.610: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 28 23:57:21.669: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715039, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715039, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715039, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715039, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 28 23:57:24.707: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:57:24.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:57:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2056" for this suite. STEP: Destroying namespace "webhook-2056-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.155 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":87,"skipped":1369,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:57:25.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 28 23:57:30.569: INFO: Successfully updated pod "annotationupdate341a3d6e-5585-4e40-b478-2d2c085ea178" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:57:32.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6154" for this suite. • [SLOW TEST:6.624 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1377,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:57:32.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:57:32.659: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:57:41.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7454" for this suite. • [SLOW TEST:8.708 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":89,"skipped":1377,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:57:41.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Apr 28 23:57:41.360: INFO: Waiting up to 5m0s for pod "client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde" in namespace "containers-9503" to be "Succeeded or Failed" Apr 28 23:57:41.364: INFO: Pod "client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005282ms Apr 28 23:57:43.367: INFO: Pod "client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007148103s Apr 28 23:57:45.371: INFO: Pod "client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010919857s STEP: Saw pod success Apr 28 23:57:45.371: INFO: Pod "client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde" satisfied condition "Succeeded or Failed" Apr 28 23:57:45.374: INFO: Trying to get logs from node latest-worker pod client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde container test-container: STEP: delete the pod Apr 28 23:57:45.399: INFO: Waiting for pod client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde to disappear Apr 28 23:57:45.403: INFO: Pod client-containers-9fca1033-5f7c-4c9a-9ab5-5ed3cef5fdde no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:57:45.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9503" for this suite. 
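For context on the "override the image's default arguments (docker cmd)" test that just passed: setting `args` in a container spec replaces the image's default CMD while leaving its ENTRYPOINT intact (setting `command` would replace the ENTRYPOINT). A minimal sketch of the kind of pod manifest such a test submits; the image and argument values here are illustrative assumptions, not taken from the suite:

```python
# Sketch of a pod manifest that overrides the image's default arguments.
# In the Kubernetes container spec, 'args' maps to docker CMD and
# 'command' maps to docker ENTRYPOINT. Image and args are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "test-container",
                "image": "busybox",
                # 'args' replaces the image's CMD; 'command' is left unset,
                # so the image's ENTRYPOINT still runs with these args.
                "args": ["echo", "override", "arguments"],
            }
        ],
    },
}

print(pod["spec"]["containers"][0]["args"])
```

The test then waits for the pod to reach "Succeeded" and reads the container log to confirm the overridden arguments took effect, matching the "Saw pod success" flow in the log above.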
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1383,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:57:45.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 28 23:57:45.477: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:57:46.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-717" for this suite. 
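The getting/updating/patching test above works against the CRD's `/status` subresource, which accepts writes to `status` only and ignores changes to `spec`. A sketch of the kind of patch body such a request carries; the condition values are illustrative assumptions, not the suite's actual payload:

```python
import json

# Sketch of a patch body aimed at a CustomResourceDefinition's /status
# subresource. On the /status endpoint only the 'status' stanza is
# honored; a 'spec' stanza in the same body would be dropped by the
# API server. Condition content below is illustrative.
status_patch = {
    "status": {
        "conditions": [
            {"type": "Established", "status": "True"},
        ]
    }
}

body = json.dumps(status_patch)
print(body)
```

A GET on the same `/status` endpoint returns the full object, which is why the test can read, update, and patch against one URL.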
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":91,"skipped":1398,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:57:46.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 28 23:58:46.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6471" for this suite. 
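The 60-second probe test above hinges on a readiness/liveness distinction: a failing readiness probe only marks the pod NotReady and removes it from service endpoints; it never restarts the container (restarts are triggered by liveness probes). A sketch of the container configuration such a test uses, with the always-failing command as an illustrative assumption:

```python
# Sketch of a container with a readiness probe that always fails.
# Because there is no liveness probe, the kubelet marks the container
# NotReady but never restarts it -- the steady state the test asserts
# after its observation window. The probe command is illustrative.
container = {
    "name": "test-webserver",
    "image": "busybox",
    "readinessProbe": {
        "exec": {"command": ["/bin/false"]},  # exits non-zero every time
        "initialDelaySeconds": 1,
        "periodSeconds": 5,
    },
}

# Expected steady state: never ready, zero restarts.
expected = {"ready": False, "restartCount": 0}
print(expected)
```

This is why the test takes a full minute to pass: it must observe the pod long enough to be confident no restart occurs, matching the SLOW TEST timing reported below.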
• [SLOW TEST:60.256 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1419,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 28 23:58:46.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 28 23:58:47.098: INFO: Pod name wrapped-volume-race-805739b3-1c3b-470c-a303-fb81c5996980: Found 0 pods out of 5 Apr 28 23:58:52.207: INFO: Pod name wrapped-volume-race-805739b3-1c3b-470c-a303-fb81c5996980: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-805739b3-1c3b-470c-a303-fb81c5996980 in namespace emptydir-wrapper-174, will wait for the garbage collector to delete the pods Apr 28 23:59:06.801: INFO: Deleting 
ReplicationController wrapped-volume-race-805739b3-1c3b-470c-a303-fb81c5996980 took: 21.257904ms
Apr 28 23:59:07.202: INFO: Terminating ReplicationController wrapped-volume-race-805739b3-1c3b-470c-a303-fb81c5996980 pods took: 400.267378ms
STEP: Creating RC which spawns configmap-volume pods
Apr 28 23:59:23.462: INFO: Pod name wrapped-volume-race-04f582ed-05a7-44b4-85cc-2a8c0bded0af: Found 0 pods out of 5
Apr 28 23:59:28.470: INFO: Pod name wrapped-volume-race-04f582ed-05a7-44b4-85cc-2a8c0bded0af: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-04f582ed-05a7-44b4-85cc-2a8c0bded0af in namespace emptydir-wrapper-174, will wait for the garbage collector to delete the pods
Apr 28 23:59:42.595: INFO: Deleting ReplicationController wrapped-volume-race-04f582ed-05a7-44b4-85cc-2a8c0bded0af took: 7.148483ms
Apr 28 23:59:42.996: INFO: Terminating ReplicationController wrapped-volume-race-04f582ed-05a7-44b4-85cc-2a8c0bded0af pods took: 400.275084ms
STEP: Creating RC which spawns configmap-volume pods
Apr 28 23:59:53.855: INFO: Pod name wrapped-volume-race-fd07df7a-6b59-420d-8e89-ea060387bbaa: Found 0 pods out of 5
Apr 28 23:59:58.861: INFO: Pod name wrapped-volume-race-fd07df7a-6b59-420d-8e89-ea060387bbaa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fd07df7a-6b59-420d-8e89-ea060387bbaa in namespace emptydir-wrapper-174, will wait for the garbage collector to delete the pods
Apr 29 00:00:12.940: INFO: Deleting ReplicationController wrapped-volume-race-fd07df7a-6b59-420d-8e89-ea060387bbaa took: 7.477807ms
Apr 29 00:00:13.340: INFO: Terminating ReplicationController wrapped-volume-race-fd07df7a-6b59-420d-8e89-ea060387bbaa pods took: 400.29702ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:24.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-174" for this suite.
• [SLOW TEST:98.219 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":93,"skipped":1432,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:24.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 00:00:24.620: INFO: Waiting up to 5m0s for pod "pod-7a80bd86-d9af-4435-a41a-3de558fe17a3" in namespace "emptydir-6519" to be "Succeeded or Failed"
Apr 29 00:00:24.626: INFO: Pod "pod-7a80bd86-d9af-4435-a41a-3de558fe17a3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.292478ms
Apr 29 00:00:26.630: INFO: Pod "pod-7a80bd86-d9af-4435-a41a-3de558fe17a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009598874s
Apr 29 00:00:28.634: INFO: Pod "pod-7a80bd86-d9af-4435-a41a-3de558fe17a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013660006s
STEP: Saw pod success
Apr 29 00:00:28.634: INFO: Pod "pod-7a80bd86-d9af-4435-a41a-3de558fe17a3" satisfied condition "Succeeded or Failed"
Apr 29 00:00:28.637: INFO: Trying to get logs from node latest-worker2 pod pod-7a80bd86-d9af-4435-a41a-3de558fe17a3 container test-container:
STEP: delete the pod
Apr 29 00:00:28.674: INFO: Waiting for pod pod-7a80bd86-d9af-4435-a41a-3de558fe17a3 to disappear
Apr 29 00:00:28.678: INFO: Pod pod-7a80bd86-d9af-4435-a41a-3de558fe17a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:28.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6519" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:28.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:00:29.446: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 00:00:31.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 00:00:33.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715229, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:00:36.549: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:00:36.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2532-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:37.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1992" for this suite.
STEP: Destroying namespace "webhook-1992-markers" for this suite.
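For context, the mutating webhook this test registers for `e2e-test-webhook-2532-crds.webhook.example.com` corresponds to an `admissionregistration.k8s.io/v1` object roughly shaped like the sketch below. All field values here are illustrative placeholders, not values taken from this run; only the webhook name and the `e2e-test-webhook` service name appear in the log.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # illustrative name
webhooks:
  - name: e2e-test-webhook-2532-crds.webhook.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["webhook.example.com"]   # illustrative CRD group
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-2532-crds"]
    clientConfig:
      service:
        name: e2e-test-webhook             # the service the log pairs with an endpoint
        namespace: webhook-1992
        path: /mutating-custom-resource    # illustrative path
      caBundle: <base64-encoded CA cert>   # placeholder
```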
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.111 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":95,"skipped":1460,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:37.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:00:38.125: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 29 00:00:40.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715238, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715238, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715238, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715238, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:00:43.170: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:00:43.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:44.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6069" for this suite.
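A CRD that routes v1-to-v2 conversion through a webhook, as this test exercises, carries a `spec.conversion` stanza along these lines. This is a hedged fragment: the group, version names, service namespace, and path are illustrative stand-ins, and only the `e2e-test-crd-conversion-webhook` service name comes from the log.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com      # illustrative CRD name
spec:
  group: stable.example.com              # illustrative group
  versions:
    - name: v1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object}}
    - name: v2
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object}}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: e2e-test-crd-conversion-webhook   # service the log waits to get an endpoint
          namespace: crd-webhook-6069
          path: /crdconvert                       # illustrative path
        caBundle: <base64-encoded CA cert>        # placeholder
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
```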
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.643 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":96,"skipped":1490,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:44.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-0b7012ea-f1a2-40fc-8286-1a144f811d49
STEP: Creating a pod to test consume secrets
Apr 29 00:00:44.817: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6" in namespace "projected-1270" to be "Succeeded or Failed"
Apr 29 00:00:44.848: INFO: Pod "pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.601737ms
Apr 29 00:00:46.851: INFO: Pod "pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034042471s
Apr 29 00:00:48.855: INFO: Pod "pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037559232s
STEP: Saw pod success
Apr 29 00:00:48.855: INFO: Pod "pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6" satisfied condition "Succeeded or Failed"
Apr 29 00:00:48.857: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6 container projected-secret-volume-test:
STEP: delete the pod
Apr 29 00:00:48.884: INFO: Waiting for pod pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6 to disappear
Apr 29 00:00:48.889: INFO: Pod pod-projected-secrets-c9419488-4c74-4c09-aaaa-1940d269a8d6 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:48.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1270" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1504,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:48.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:49.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6190" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":98,"skipped":1511,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:49.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Apr 29 00:00:49.171: INFO: Waiting up to 5m0s for pod "var-expansion-f826d133-1df1-455d-819a-a35e7990855f" in namespace "var-expansion-1633" to be "Succeeded or Failed"
Apr 29 00:00:49.177: INFO: Pod "var-expansion-f826d133-1df1-455d-819a-a35e7990855f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558454ms
Apr 29 00:00:51.181: INFO: Pod "var-expansion-f826d133-1df1-455d-819a-a35e7990855f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010614822s
Apr 29 00:00:53.185: INFO: Pod "var-expansion-f826d133-1df1-455d-819a-a35e7990855f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014773056s
STEP: Saw pod success
Apr 29 00:00:53.186: INFO: Pod "var-expansion-f826d133-1df1-455d-819a-a35e7990855f" satisfied condition "Succeeded or Failed"
Apr 29 00:00:53.188: INFO: Trying to get logs from node latest-worker2 pod var-expansion-f826d133-1df1-455d-819a-a35e7990855f container dapi-container:
STEP: delete the pod
Apr 29 00:00:53.210: INFO: Waiting for pod var-expansion-f826d133-1df1-455d-819a-a35e7990855f to disappear
Apr 29 00:00:53.214: INFO: Pod var-expansion-f826d133-1df1-455d-819a-a35e7990855f no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:00:53.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1633" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1525,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:00:53.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 29 00:01:03.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 00:01:03.406: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 00:01:05.406: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 00:01:05.410: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 00:01:07.406: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 00:01:07.409: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 00:01:09.406: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 00:01:09.410: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 00:01:11.406: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 00:01:11.409: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 00:01:13.406: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 00:01:13.410: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:01:13.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9643" for this suite.
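The `pod-with-poststart-exec-hook` pod the test creates carries a `lifecycle.postStart.exec` hook, which the kubelet runs immediately after the container starts. A hedged sketch of such a pod (the image and hook command are illustrative placeholders; only the pod name comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
    - name: pod-with-poststart-exec-hook
      image: k8s.gcr.io/pause:3.2                     # illustrative image
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo poststart"]   # illustrative hook command
```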
• [SLOW TEST:20.208 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:01:13.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:01:17.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6252" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1592,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:01:17.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:01:17.795: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 29 00:01:17.801: INFO: Number of nodes with available pods: 0
Apr 29 00:01:17.801: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 29 00:01:17.905: INFO: Number of nodes with available pods: 0
Apr 29 00:01:17.905: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:18.910: INFO: Number of nodes with available pods: 0
Apr 29 00:01:18.910: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:19.909: INFO: Number of nodes with available pods: 0
Apr 29 00:01:19.909: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:20.909: INFO: Number of nodes with available pods: 0
Apr 29 00:01:20.909: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:21.910: INFO: Number of nodes with available pods: 1
Apr 29 00:01:21.910: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 29 00:01:21.951: INFO: Number of nodes with available pods: 1
Apr 29 00:01:21.951: INFO: Number of running nodes: 0, number of available pods: 1
Apr 29 00:01:22.955: INFO: Number of nodes with available pods: 0
Apr 29 00:01:22.955: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 29 00:01:22.970: INFO: Number of nodes with available pods: 0
Apr 29 00:01:22.970: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:23.995: INFO: Number of nodes with available pods: 0
Apr 29 00:01:23.995: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:24.974: INFO: Number of nodes with available pods: 0
Apr 29 00:01:24.974: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:25.974: INFO: Number of nodes with available pods: 0
Apr 29 00:01:25.974: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:26.974: INFO: Number of nodes with available pods: 0
Apr 29 00:01:26.974: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:27.974: INFO: Number of nodes with available pods: 0
Apr 29 00:01:27.974: INFO: Node latest-worker2 is running more than one daemon pod
Apr 29 00:01:28.974: INFO: Number of nodes with available pods: 1
Apr 29 00:01:28.974: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1786, will wait for the garbage collector to delete the pods
Apr 29 00:01:29.039: INFO: Deleting DaemonSet.extensions daemon-set took: 5.705458ms
Apr 29 00:01:29.339: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266901ms
Apr 29 00:01:43.043: INFO: Number of nodes with available pods: 0
Apr 29 00:01:43.043: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 00:01:43.050: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1786/daemonsets","resourceVersion":"11844865"},"items":null}
Apr 29 00:01:43.054: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1786/pods","resourceVersion":"11844865"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:01:43.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1786" for this suite.
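The test drives scheduling by putting a `nodeSelector` on the DaemonSet and moving a label between nodes (the log's "blue" then "green" steps), later switching the update strategy to `RollingUpdate`. A hedged sketch of such a DaemonSet; the label key, app labels, and image are illustrative placeholders, and only the `daemon-set` name, the blue/green values, and the `RollingUpdate` strategy appear in the log:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # illustrative pod label
  updateStrategy:
    type: RollingUpdate        # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue            # illustrative label key; the test later targets "green"
      containers:
        - name: app
          image: k8s.gcr.io/pause:3.2   # illustrative image
```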
• [SLOW TEST:25.421 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":102,"skipped":1597,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:01:43.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 00:01:43.165: INFO: Waiting up to 5m0s for pod "pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a" in namespace "emptydir-255" to be "Succeeded or Failed"
Apr 29 00:01:43.172: INFO: Pod "pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.911354ms
Apr 29 00:01:45.176: INFO: Pod "pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011139463s
Apr 29 00:01:47.180: INFO: Pod "pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015383666s
STEP: Saw pod success
Apr 29 00:01:47.180: INFO: Pod "pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a" satisfied condition "Succeeded or Failed"
Apr 29 00:01:47.183: INFO: Trying to get logs from node latest-worker2 pod pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a container test-container:
STEP: delete the pod
Apr 29 00:01:47.206: INFO: Waiting for pod pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a to disappear
Apr 29 00:01:47.210: INFO: Pod pod-b4a30d2a-f99c-49f6-8d0b-235444e6b82a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:01:47.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-255" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1603,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:01:47.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-lgcc
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 00:01:47.338: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lgcc" in namespace "subpath-6177" to be "Succeeded or Failed"
Apr 29 00:01:47.363: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.148395ms
Apr 29 00:01:49.367: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028725356s
Apr 29 00:01:51.370: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 4.03235182s
Apr 29 00:01:53.390: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 6.052081081s
Apr 29 00:01:55.394: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 8.055898528s
Apr 29 00:01:57.398: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 10.060155855s
Apr 29 00:01:59.402: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 12.063594053s
Apr 29 00:02:01.405: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 14.066918182s
Apr 29 00:02:03.412: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 16.073634924s
Apr 29 00:02:05.416: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 18.077576903s
Apr 29 00:02:07.457: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 20.118810187s
Apr 29 00:02:09.461: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Running", Reason="", readiness=true. Elapsed: 22.123342961s
Apr 29 00:02:11.465: INFO: Pod "pod-subpath-test-configmap-lgcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.127400432s
STEP: Saw pod success
Apr 29 00:02:11.466: INFO: Pod "pod-subpath-test-configmap-lgcc" satisfied condition "Succeeded or Failed"
Apr 29 00:02:11.468: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-lgcc container test-container-subpath-configmap-lgcc:
STEP: delete the pod
Apr 29 00:02:11.510: INFO: Waiting for pod pod-subpath-test-configmap-lgcc to disappear
Apr 29 00:02:11.552: INFO: Pod pod-subpath-test-configmap-lgcc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lgcc
Apr 29 00:02:11.552: INFO: Deleting pod "pod-subpath-test-configmap-lgcc" in namespace "subpath-6177"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:02:11.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6177" for this suite.
• [SLOW TEST:24.370 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":104,"skipped":1606,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:02:11.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:02:11.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee" in namespace "downward-api-2987" to be "Succeeded or Failed"
Apr 29 00:02:11.718: INFO: Pod "downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee": Phase="Pending", Reason="", readiness=false. Elapsed: 14.218883ms
Apr 29 00:02:13.722: INFO: Pod "downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018328155s
Apr 29 00:02:15.762: INFO: Pod "downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058358105s
STEP: Saw pod success
Apr 29 00:02:15.762: INFO: Pod "downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee" satisfied condition "Succeeded or Failed"
Apr 29 00:02:15.764: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee container client-container:
STEP: delete the pod
Apr 29 00:02:15.803: INFO: Waiting for pod downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee to disappear
Apr 29 00:02:15.838: INFO: Pod downwardapi-volume-7db93d2d-7f82-466b-8e5d-cd7230b822ee no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:02:15.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2987" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1607,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:02:15.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Apr 29 00:02:16.229: INFO: Waiting up to 5m0s
for pod "var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb" in namespace "var-expansion-1764" to be "Succeeded or Failed" Apr 29 00:02:16.233: INFO: Pod "var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120487ms Apr 29 00:02:18.238: INFO: Pod "var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00868834s Apr 29 00:02:20.242: INFO: Pod "var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013088601s STEP: Saw pod success Apr 29 00:02:20.242: INFO: Pod "var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb" satisfied condition "Succeeded or Failed" Apr 29 00:02:20.246: INFO: Trying to get logs from node latest-worker pod var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb container dapi-container: STEP: delete the pod Apr 29 00:02:20.297: INFO: Waiting for pod var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb to disappear Apr 29 00:02:20.319: INFO: Pod var-expansion-48294a50-f023-49f1-b37b-8be090cba3cb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:02:20.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1764" for this suite. 
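The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` / `Phase="…", Elapsed: …` lines in the tests above all come from one poll-until-terminal-phase loop. A minimal sketch of that pattern, with a hypothetical `get_phase` stub standing in for the real pod-status API call:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or the timeout.

    get_phase is a stand-in for a real pod-status lookup; the e2e
    framework logs one 'Phase=..., Elapsed: ...' line per poll,
    as seen in the log above.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated status sequence mirroring the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), interval=0.01)
```

This is a sketch of the observable polling behavior only, not the framework's actual implementation.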
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1613,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:02:20.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-989/secret-test-3fb104a0-b7fd-40ff-b7d2-9e414ef6b126 STEP: Creating a pod to test consume secrets Apr 29 00:02:20.387: INFO: Waiting up to 5m0s for pod "pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0" in namespace "secrets-989" to be "Succeeded or Failed" Apr 29 00:02:20.390: INFO: Pod "pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.44154ms Apr 29 00:02:22.395: INFO: Pod "pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00764876s Apr 29 00:02:24.399: INFO: Pod "pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012039343s STEP: Saw pod success Apr 29 00:02:24.399: INFO: Pod "pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0" satisfied condition "Succeeded or Failed" Apr 29 00:02:24.402: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0 container env-test: STEP: delete the pod Apr 29 00:02:24.438: INFO: Waiting for pod pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0 to disappear Apr 29 00:02:24.450: INFO: Pod pod-configmaps-445532c8-ff12-4131-b021-2415c60fd5f0 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:02:24.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-989" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:02:24.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 29 
00:02:28.563: INFO: &Pod{ObjectMeta:{send-events-5893c9bb-81b0-4afe-a7fd-37e1c2069a36 events-7003 /api/v1/namespaces/events-7003/pods/send-events-5893c9bb-81b0-4afe-a7fd-37e1c2069a36 9a901778-bd27-4a52-8f13-c7bcbdb89bcc 11845148 0 2020-04-29 00:02:24 +0000 UTC map[name:foo time:529990048] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8gtd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8gtd9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8gtd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurit
yContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:02:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.79,StartTime:2020-04-29 00:02:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 00:02:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://0722a9784c7eb7bb3a548235126b6f3df82fa8937360a2b6babcc1e7b5d896f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 29 00:02:30.568: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 29 00:02:32.572: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:02:32.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7003" for this suite. 
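The Events test above passes once it sees one event from the scheduler and one from the kubelet about the pod. The check amounts to filtering the event list by source component and involved object; a sketch over hypothetical event records (field names here are illustrative, not the exact API types):

```python
# Hypothetical event records carrying the two fields the test inspects:
# the emitting source component and the pod the event is about.
events = [
    {"source": "default-scheduler", "involved_pod": "send-events-5893", "reason": "Scheduled"},
    {"source": "kubelet",           "involved_pod": "send-events-5893", "reason": "Started"},
    {"source": "kubelet",           "involved_pod": "other-pod",        "reason": "Pulled"},
]

def events_for(pod, source):
    """Return events emitted by `source` about `pod`, mirroring the
    'checking for scheduler/kubelet event about the pod' steps above."""
    return [e for e in events if e["involved_pod"] == pod and e["source"] == source]

saw_scheduler = bool(events_for("send-events-5893", "default-scheduler"))
saw_kubelet = bool(events_for("send-events-5893", "kubelet"))
```

The real test retries this check until both lists are non-empty, which is why the two "Saw … event" lines arrive a couple of seconds apart.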
• [SLOW TEST:8.149 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":108,"skipped":1662,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:02:32.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-48834f95-978c-4aae-b861-1941ef2cc961 in namespace container-probe-1090 Apr 29 00:02:36.681: INFO: Started pod busybox-48834f95-978c-4aae-b861-1941ef2cc961 in namespace container-probe-1090 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 00:02:36.684: INFO: Initial restart count of pod busybox-48834f95-978c-4aae-b861-1941ef2cc961 is 0 STEP: deleting the pod [AfterEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:06:38.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1090" for this suite. • [SLOW TEST:246.190 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:06:38.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 29 00:06:38.893: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 29 00:06:38.915: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 29 00:06:38.915: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 29 00:06:38.921: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 29 00:06:38.921: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 29 00:06:38.958: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 29 00:06:38.958: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 29 00:06:46.097: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:06:46.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1554" for this suite. • [SLOW TEST:7.356 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":110,"skipped":1740,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:06:46.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 29 00:06:46.236: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 00:06:46.257: INFO: Waiting for terminating namespaces to be deleted... 
Apr 29 00:06:46.259: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 29 00:06:46.278: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 29 00:06:46.278: INFO: Container kindnet-cni ready: true, restart count 0 Apr 29 00:06:46.278: INFO: pod-no-resources from limitrange-1554 started at 2020-04-29 00:06:38 +0000 UTC (1 container status recorded) Apr 29 00:06:46.278: INFO: Container pause ready: true, restart count 0 Apr 29 00:06:46.278: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 29 00:06:46.278: INFO: Container kube-proxy ready: true, restart count 0 Apr 29 00:06:46.278: INFO: pfpod2 from limitrange-1554 started at 2020-04-29 00:06:46 +0000 UTC (1 container status recorded) Apr 29 00:06:46.278: INFO: Container pause ready: false, restart count 0 Apr 29 00:06:46.278: INFO: pfpod from limitrange-1554 started at 2020-04-29 00:06:41 +0000 UTC (1 container status recorded) Apr 29 00:06:46.278: INFO: Container pause ready: true, restart count 0 Apr 29 00:06:46.278: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 29 00:06:46.294: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 29 00:06:46.294: INFO: Container kube-proxy ready: true, restart count 0 Apr 29 00:06:46.294: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Apr 29 00:06:46.294: INFO: Container kindnet-cni ready: true, restart count 0 Apr 29 00:06:46.294: INFO: pod-partial-resources from limitrange-1554 started at 2020-04-29 00:06:39 +0000 UTC (1 container status recorded) Apr 29 00:06:46.294: INFO: Container pause ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-df530ee5-25ff-42b6-93a7-11203f53f778 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-df530ee5-25ff-42b6-93a7-11203f53f778 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-df530ee5-25ff-42b6-93a7-11203f53f778 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:07:04.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9882" for this suite. 
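The scheduling predicate exercised above lets pod2 and pod3 land on pod1's node because a hostPort only conflicts when port, protocol, and hostIP all overlap. A sketch of that conflict rule, treating 0.0.0.0 as a wildcard that overlaps every hostIP (a simplified model of the scheduler's check, not its actual code):

```python
WILDCARD = "0.0.0.0"

def host_ports_conflict(a, b):
    """a and b are (port, host_ip, protocol) triples.

    Two hostPort bindings conflict only when port and protocol match
    and the IPs overlap: equal, or either side binds the 0.0.0.0
    wildcard and so claims every interface.
    """
    port_a, ip_a, proto_a = a
    port_b, ip_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or WILDCARD in (ip_a, ip_b)

# The three pods from the test above: all share hostPort 54321.
pod1 = (54321, "127.0.0.1", "TCP")
pod2 = (54321, "127.0.0.2", "TCP")  # different hostIP -> schedulable
pod3 = (54321, "127.0.0.2", "UDP")  # same IP as pod2, different protocol
```

Under this rule no pair among pod1, pod2, pod3 conflicts, which is why all three schedule onto the same node in the test.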
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:18.406 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":111,"skipped":1758,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:07:04.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7052.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 38.191.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.191.38_udp@PTR;check="$$(dig +tcp +noall +answer +search 38.191.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.191.38_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7052.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 38.191.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.191.38_udp@PTR;check="$$(dig +tcp +noall +answer +search 38.191.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.191.38_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 00:07:10.737: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.740: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.742: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.745: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.766: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.771: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.773: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:10.800: INFO: Lookups using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local]
Apr 29 00:07:15.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.809: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.815: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.836: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.839: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.842: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.845: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:15.861: INFO: Lookups using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local]
Apr 29 00:07:20.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.810: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.814: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.817: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.835: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.838: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.841: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.843: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:20.858: INFO: Lookups using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local]
Apr 29 00:07:25.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.809: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.815: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.833: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.835: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.839: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.842: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:25.861: INFO: Lookups using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local]
Apr 29 00:07:30.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.809: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.812: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.837: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.840: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.843: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.845: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:30.860: INFO: Lookups using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local]
Apr 29 00:07:35.804: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.845: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.865: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.870: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.872: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0: the server could not find the requested resource (get pods dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0)
Apr 29 00:07:35.890: INFO: Lookups using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local]
Apr 29 00:07:40.869: INFO: DNS probes using dns-7052/dns-test-8f94b10a-77f2-4622-a7bd-8b80407cb9a0 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:07:41.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7052" for this suite.
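The dig loops above all follow one pattern: each lookup that returns a non-empty answer drops an "OK" marker file into the pod's /results volume, and the framework then polls those markers. A minimal sketch of that pattern, where `lookup` is a hypothetical stand-in for `dig +search <name> <type>` and a temp directory stands in for /results:

```shell
# Sketch of the probe pattern: non-empty answer => write an OK marker.
results=$(mktemp -d)
lookup() { echo "10.96.191.38"; }  # assumed stub; the real pods run dig against cluster DNS
check="$(lookup dns-test-service.dns-7052.svc.cluster.local A)" \
  && test -n "$check" && echo OK > "$results/udp@dns-test-service"
cat "$results/udp@dns-test-service"   # prints OK once the lookup succeeds
```

The "Unable to read ... @..." retries above are the framework repeatedly failing to fetch these marker files until the probe pod's lookups start succeeding.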
• [SLOW TEST:37.213 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":112,"skipped":1767,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:07:41.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-47ded047-a25f-4c3b-af00-6c685ad90236
STEP: Creating a pod to test consume secrets
Apr 29 00:07:41.859: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f" in namespace "projected-6976" to be "Succeeded or Failed"
Apr 29 00:07:41.863: INFO: Pod "pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753386ms
Apr 29 00:07:44.299: INFO: Pod "pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439933096s
Apr 29 00:07:46.303: INFO: Pod "pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.443704146s
STEP: Saw pod success
Apr 29 00:07:46.303: INFO: Pod "pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f" satisfied condition "Succeeded or Failed"
Apr 29 00:07:46.306: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f container projected-secret-volume-test:
STEP: delete the pod
Apr 29 00:07:46.407: INFO: Waiting for pod pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f to disappear
Apr 29 00:07:46.414: INFO: Pod pod-projected-secrets-941497f4-7ba1-483b-8052-f7576d7dee6f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:07:46.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6976" for this suite.
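The 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' entries above come from a poll loop over the pod phase. A hedged sketch of that loop, where `get_phase` is a hypothetical stub for `kubectl get pod <name> -o jsonpath='{.status.phase}'`:

```shell
# Poll until the pod reaches a terminal phase or the retry budget runs out.
i=0
get_phase() { if [ "$i" -lt 2 ]; then echo Pending; else echo Succeeded; fi; }  # stub
phase=Pending
while [ "$phase" != "Succeeded" ] && [ "$phase" != "Failed" ] && [ "$i" -le 150 ]; do
  phase=$(get_phase)   # the real framework sleeps ~2s between polls (150 * 2s ≈ 5m)
  i=$((i+1))
done
echo "$phase"   # prints Succeeded
```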
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1770,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:07:46.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:07:47.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 00:07:49.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715667, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715667, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:07:52.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:04.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1336" for this suite.
STEP: Destroying namespace "webhook-1336-markers" for this suite.
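The timeout steps above exercise two real `admissionregistration.k8s.io/v1` fields: `timeoutSeconds` (how long the apiserver waits for the webhook, defaulting to 10s in v1) and `failurePolicy` (whether a timed-out call rejects the request or is ignored). A hedged illustration of such a registration; the metadata name and service details here are assumptions, not values from this run:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: sample-slow-webhook        # hypothetical name
webhooks:
- name: slow.example.com           # hypothetical name
  timeoutSeconds: 1                # shorter than the webhook's simulated 5s latency
  failurePolicy: Ignore            # Fail would instead reject the request on timeout
  clientConfig:
    service:
      namespace: webhook-1336
      name: e2e-test-webhook
      path: /slow                  # assumed handler path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With `failurePolicy: Fail` and the same 1s timeout, the request fails (the first step above); with `Ignore`, the timed-out webhook is skipped and the request proceeds (the second step).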
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.120 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":114,"skipped":1778,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:04.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:08:04.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a" in namespace "projected-3097" to be "Succeeded or Failed"
Apr 29 00:08:04.631: INFO: Pod "downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.873284ms
Apr 29 00:08:06.679: INFO: Pod "downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059264468s
Apr 29 00:08:08.684: INFO: Pod "downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063564204s
STEP: Saw pod success
Apr 29 00:08:08.684: INFO: Pod "downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a" satisfied condition "Succeeded or Failed"
Apr 29 00:08:08.687: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a container client-container:
STEP: delete the pod
Apr 29 00:08:08.710: INFO: Waiting for pod downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a to disappear
Apr 29 00:08:08.715: INFO: Pod downwardapi-volume-05f4de09-6574-405b-bcd6-5fb41d130d5a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:08.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3097" for this suite.
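The "downward API volume plugin" being tested above exposes a container's own resource limits as files in a mounted volume via `resourceFieldRef`. A hedged sketch of the kind of pod the test creates; the name, image, and limit value are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"               # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The test then reads the container's logs (the `Trying to get logs ... container client-container` entry above) to verify the file content matches the declared limit.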
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1787,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:08.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-1c92e2d7-121e-4893-a4c5-253f573e252a
STEP: Creating a pod to test consume secrets
Apr 29 00:08:08.831: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132" in namespace "projected-2811" to be "Succeeded or Failed"
Apr 29 00:08:08.846: INFO: Pod "pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132": Phase="Pending", Reason="", readiness=false. Elapsed: 15.315076ms
Apr 29 00:08:10.851: INFO: Pod "pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019962434s
Apr 29 00:08:12.854: INFO: Pod "pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023735446s
STEP: Saw pod success
Apr 29 00:08:12.854: INFO: Pod "pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132" satisfied condition "Succeeded or Failed"
Apr 29 00:08:12.857: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132 container projected-secret-volume-test:
STEP: delete the pod
Apr 29 00:08:12.912: INFO: Waiting for pod pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132 to disappear
Apr 29 00:08:12.923: INFO: Pod pod-projected-secrets-96d703ff-0040-44a3-a204-3cfafe076132 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:12.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2811" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1803,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:12.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:08:13.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b" in namespace "downward-api-9765" to be "Succeeded or Failed"
Apr 29 00:08:13.045: INFO: Pod "downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.181145ms
Apr 29 00:08:15.049: INFO: Pod "downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015620088s
Apr 29 00:08:17.053: INFO: Pod "downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020052021s
STEP: Saw pod success
Apr 29 00:08:17.053: INFO: Pod "downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b" satisfied condition "Succeeded or Failed"
Apr 29 00:08:17.056: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b container client-container:
STEP: delete the pod
Apr 29 00:08:17.100: INFO: Waiting for pod downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b to disappear
Apr 29 00:08:17.110: INFO: Pod downwardapi-volume-5515b444-fc61-41a9-bf6c-80d992e2702b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:17.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9765" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1839,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:17.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 29 00:08:21.759: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-431 pod-service-account-a551a551-d550-4f27-9e05-3846a51e5375 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 29 00:08:25.866: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-431 pod-service-account-a551a551-d550-4f27-9e05-3846a51e5375 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 29 00:08:26.080: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-431 pod-service-account-a551a551-d550-4f27-9e05-3846a51e5375 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:26.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-431" for this suite.
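The three `kubectl exec ... cat` calls above read the standard files every pod gets from its mounted service-account credentials; the mount path is fixed by Kubernetes:

```shell
# The three files the ServiceAccounts test reads back from inside the pod.
dir=/var/run/secrets/kubernetes.io/serviceaccount
for f in token ca.crt namespace; do
  echo "$dir/$f"
done
```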
• [SLOW TEST:9.165 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":118,"skipped":1840,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:26.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0a0c0b1c-9c92-4999-bcf8-c391221af322
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-0a0c0b1c-9c92-4999-bcf8-c391221af322
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:32.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5730" for this suite.
• [SLOW TEST:6.174 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1846,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:32.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5595.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5595.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5595.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5595.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5595.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5595.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 00:08:38.610: INFO: DNS probes using dns-5595/dns-test-f606ab2d-a436-44d0-99b3-9ca68ae8522a succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:38.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5595" for this suite.
• [SLOW TEST:6.250 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":120,"skipped":1893,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:38.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:08:38.931: INFO: Creating ReplicaSet my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a
Apr 29 00:08:39.003: INFO: Pod name my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a: Found 0 pods out of 1
Apr 29 00:08:44.052: INFO: Pod name my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a: Found 1 pods out of 1
Apr 29 00:08:44.052: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a" is running
Apr 29 00:08:44.170: INFO: Pod "my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a-8tg4z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:08:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:08:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:08:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:08:39 +0000 UTC Reason: Message:}])
Apr 29 00:08:44.170: INFO: Trying to dial the pod
Apr 29 00:08:49.183: INFO: Controller my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a: Got expected result from replica 1 [my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a-8tg4z]: "my-hostname-basic-2bd9c1ad-13fd-498a-9715-913f25c45d5a-8tg4z", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:08:49.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1927" for this suite.
• [SLOW TEST:10.482 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":121,"skipped":1899,"failed":0}
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:08:49.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-fjt6
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 00:08:49.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fjt6" in namespace "subpath-7587" to be "Succeeded or Failed"
Apr 29 00:08:49.374: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.255511ms
Apr 29 00:08:51.378: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016158481s
Apr 29 00:08:53.383: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 4.020492104s
Apr 29 00:08:55.387: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 6.02474952s
Apr 29 00:08:57.391: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 8.028584395s
Apr 29 00:08:59.395: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 10.032500894s
Apr 29 00:09:01.399: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 12.037088706s
Apr 29 00:09:03.403: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 14.041251443s
Apr 29 00:09:05.408: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 16.045867071s
Apr 29 00:09:07.412: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 18.049988105s
Apr 29 00:09:09.417: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 20.054706781s
Apr 29 00:09:11.421: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Running", Reason="", readiness=true. Elapsed: 22.058650664s
Apr 29 00:09:13.425: INFO: Pod "pod-subpath-test-downwardapi-fjt6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062921102s
STEP: Saw pod success
Apr 29 00:09:13.425: INFO: Pod "pod-subpath-test-downwardapi-fjt6" satisfied condition "Succeeded or Failed"
Apr 29 00:09:13.428: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-fjt6 container test-container-subpath-downwardapi-fjt6:
STEP: delete the pod
Apr 29 00:09:13.518: INFO: Waiting for pod pod-subpath-test-downwardapi-fjt6 to disappear
Apr 29 00:09:13.571: INFO: Pod pod-subpath-test-downwardapi-fjt6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fjt6
Apr 29 00:09:13.571: INFO: Deleting pod "pod-subpath-test-downwardapi-fjt6" in namespace "subpath-7587"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:13.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7587" for this suite.
• [SLOW TEST:24.390 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":122,"skipped":1899,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:13.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4272
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 00:09:13.669: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 00:09:13.722: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 00:09:15.726: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 00:09:17.726: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 00:09:19.726: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 00:09:21.726: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 00:09:23.726: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 00:09:25.726: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 00:09:27.726: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 00:09:27.732: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 00:09:29.736: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 00:09:33.848: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.79 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4272 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 00:09:33.848: INFO: >>> kubeConfig: /root/.kube/config
I0429 00:09:33.874972 7 log.go:172] (0xc002b48790) (0xc001a8da40) Create stream
I0429 00:09:33.874999 7 log.go:172] (0xc002b48790) (0xc001a8da40) Stream added, broadcasting: 1
I0429 00:09:33.876757 7 log.go:172] (0xc002b48790) Reply frame received for 1
I0429 00:09:33.876799 7 log.go:172] (0xc002b48790) (0xc001d11a40) Create stream
I0429 00:09:33.876815 7 log.go:172] (0xc002b48790) (0xc001d11a40) Stream added, broadcasting: 3
I0429 00:09:33.877985 7 log.go:172] (0xc002b48790) Reply frame received for 3
I0429 00:09:33.878022 7 log.go:172] (0xc002b48790) (0xc001d11c20) Create stream
I0429 00:09:33.878035 7 log.go:172] (0xc002b48790) (0xc001d11c20) Stream added, broadcasting: 5
I0429 00:09:33.878882 7 log.go:172] (0xc002b48790) Reply frame received for 5
I0429 00:09:34.948960 7 log.go:172] (0xc002b48790) Data frame received for 5
I0429 00:09:34.949007 7 log.go:172] (0xc002b48790) Data frame received for 3
I0429 00:09:34.949054 7 log.go:172] (0xc001d11a40) (3) Data frame handling
I0429 00:09:34.949091 7 log.go:172] (0xc001d11a40) (3) Data frame sent
I0429 00:09:34.949292 7 log.go:172] (0xc001d11c20) (5) Data frame handling
I0429 00:09:34.949520 7 log.go:172] (0xc002b48790) Data frame received for 3
I0429 00:09:34.949550 7 log.go:172] (0xc001d11a40) (3) Data frame handling
I0429 00:09:34.951646 7 log.go:172] (0xc002b48790) Data frame received for 1
I0429 00:09:34.951714 7 log.go:172] (0xc001a8da40) (1) Data frame handling
I0429 00:09:34.951773 7 log.go:172] (0xc001a8da40) (1) Data frame sent
I0429 00:09:34.951823 7 log.go:172] (0xc002b48790) (0xc001a8da40) Stream removed, broadcasting: 1
I0429 00:09:34.951866 7 log.go:172] (0xc002b48790) Go away received
I0429 00:09:34.952061 7 log.go:172] (0xc002b48790) (0xc001a8da40) Stream removed, broadcasting: 1
I0429 00:09:34.952085 7 log.go:172] (0xc002b48790) (0xc001d11a40) Stream removed, broadcasting: 3
I0429 00:09:34.952098 7 log.go:172] (0xc002b48790) (0xc001d11c20) Stream removed, broadcasting: 5
Apr 29 00:09:34.952: INFO: Found all expected endpoints: [netserver-0]
Apr 29 00:09:34.955: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.89 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4272 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 00:09:34.955: INFO: >>> kubeConfig: /root/.kube/config
I0429 00:09:34.983609 7 log.go:172] (0xc002834d10) (0xc001552320) Create stream
I0429 00:09:34.983637 7 log.go:172] (0xc002834d10) (0xc001552320) Stream added, broadcasting: 1
I0429 00:09:34.985067 7 log.go:172] (0xc002834d10) Reply frame received for 1
I0429 00:09:34.985105 7 log.go:172] (0xc002834d10) (0xc001d11ea0) Create stream
I0429 00:09:34.985224 7 log.go:172] (0xc002834d10) (0xc001d11ea0) Stream added, broadcasting: 3
I0429 00:09:34.986006 7 log.go:172] (0xc002834d10) Reply frame received for 3
I0429 00:09:34.986029 7 log.go:172] (0xc002834d10) (0xc001d11f40) Create stream
I0429 00:09:34.986037 7 log.go:172] (0xc002834d10) (0xc001d11f40) Stream added, broadcasting: 5
I0429 00:09:34.986613 7 log.go:172] (0xc002834d10) Reply frame received for 5
I0429 00:09:36.031300 7 log.go:172] (0xc002834d10) Data frame received for 3
I0429 00:09:36.031352 7 log.go:172] (0xc001d11ea0) (3) Data frame handling
I0429 00:09:36.031398 7 log.go:172] (0xc001d11ea0) (3) Data frame sent
I0429 00:09:36.031906 7 log.go:172] (0xc002834d10) Data frame received for 3
I0429 00:09:36.031939 7 log.go:172] (0xc001d11ea0) (3) Data frame handling
I0429 00:09:36.032023 7 log.go:172] (0xc002834d10) Data frame received for 5
I0429 00:09:36.032063 7 log.go:172] (0xc001d11f40) (5) Data frame handling
I0429 00:09:36.035059 7 log.go:172] (0xc002834d10) Data frame received for 1
I0429 00:09:36.035087 7 log.go:172] (0xc001552320) (1) Data frame handling
I0429 00:09:36.035112 7 log.go:172] (0xc001552320) (1) Data frame sent
I0429 00:09:36.035131 7 log.go:172] (0xc002834d10) (0xc001552320) Stream removed, broadcasting: 1
I0429 00:09:36.035245 7 log.go:172] (0xc002834d10) (0xc001552320) Stream removed, broadcasting: 1
I0429 00:09:36.035271 7 log.go:172] (0xc002834d10) (0xc001d11ea0) Stream removed, broadcasting: 3
I0429 00:09:36.035290 7 log.go:172] (0xc002834d10) (0xc001d11f40) Stream removed, broadcasting: 5
Apr 29 00:09:36.035: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:36.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0429 00:09:36.035407 7 log.go:172] (0xc002834d10) Go away received
STEP: Destroying namespace "pod-network-test-4272" for this suite.
• [SLOW TEST:22.463 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":1900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:36.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:36.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5500" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":1975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:36.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:43.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4589" for this suite.
• [SLOW TEST:7.091 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":125,"skipped":2017,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:43.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:09:44.166: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 00:09:46.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715784, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715784, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715784, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715784, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:09:49.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 29 00:09:49.224: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:49.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1111" for this suite.
STEP: Destroying namespace "webhook-1111-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.095 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":126,"skipped":2021,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:49.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Apr 29 00:09:49.536: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6192" to be "Succeeded or Failed"
Apr 29 00:09:49.545: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.414515ms
Apr 29 00:09:51.559: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022490572s
Apr 29 00:09:53.563: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026479961s
Apr 29 00:09:55.567: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03058046s
STEP: Saw pod success
Apr 29 00:09:55.567: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 29 00:09:55.570: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 29 00:09:55.662: INFO: Waiting for pod pod-host-path-test to disappear
Apr 29 00:09:55.669: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:55.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6192" for this suite.
• [SLOW TEST:6.265 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2072,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:55.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 29 00:09:55.755: INFO: Waiting up to 5m0s for pod "pod-641dfc60-f673-42ad-b445-ca6815b54065" in namespace "emptydir-8993" to be "Succeeded or Failed"
Apr 29 00:09:55.758: INFO: Pod "pod-641dfc60-f673-42ad-b445-ca6815b54065": Phase="Pending", Reason="", readiness=false. Elapsed: 3.425746ms
Apr 29 00:09:57.762: INFO: Pod "pod-641dfc60-f673-42ad-b445-ca6815b54065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007444885s
Apr 29 00:09:59.767: INFO: Pod "pod-641dfc60-f673-42ad-b445-ca6815b54065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011583318s
STEP: Saw pod success
Apr 29 00:09:59.767: INFO: Pod "pod-641dfc60-f673-42ad-b445-ca6815b54065" satisfied condition "Succeeded or Failed"
Apr 29 00:09:59.769: INFO: Trying to get logs from node latest-worker2 pod pod-641dfc60-f673-42ad-b445-ca6815b54065 container test-container:
STEP: delete the pod
Apr 29 00:09:59.806: INFO: Waiting for pod pod-641dfc60-f673-42ad-b445-ca6815b54065 to disappear
Apr 29 00:09:59.846: INFO: Pod pod-641dfc60-f673-42ad-b445-ca6815b54065 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:09:59.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8993" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2077,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:09:59.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:09:59.906: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 29 00:09:59.926: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 29 00:10:04.942: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 00:10:04.942: INFO: Creating deployment "test-rolling-update-deployment"
Apr 29 00:10:04.952: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 29 00:10:04.964: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 29 00:10:06.971: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 29 00:10:06.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715805, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715805, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715805, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715804, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 00:10:08.977: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Apr 29 00:10:08.986: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-918 /apis/apps/v1/namespaces/deployment-918/deployments/test-rolling-update-deployment 61726248-4820-46cd-8349-dee777273db2 11847336 1 2020-04-29 00:10:04 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004710088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-29 00:10:05 +0000 UTC,LastTransitionTime:2020-04-29 00:10:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully 
progressed.,LastUpdateTime:2020-04-29 00:10:08 +0000 UTC,LastTransitionTime:2020-04-29 00:10:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 29 00:10:08.989: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-918 /apis/apps/v1/namespaces/deployment-918/replicasets/test-rolling-update-deployment-664dd8fc7f 3626bedc-2bd8-41ca-92a1-b73caca064dc 11847324 1 2020-04-29 00:10:04 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 61726248-4820-46cd-8349-dee777273db2 0xc0046136b7 0xc0046136b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004613728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:10:08.989: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 29 00:10:08.989: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-918 /apis/apps/v1/namespaces/deployment-918/replicasets/test-rolling-update-controller 9f901e96-2047-431e-b29f-89e09ed37ddf 11847334 2 2020-04-29 00:09:59 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 61726248-4820-46cd-8349-dee777273db2 0xc0046135e7 0xc0046135e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004613648 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:10:08.992: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-7d2rj" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-7d2rj test-rolling-update-deployment-664dd8fc7f- deployment-918 
/api/v1/namespaces/deployment-918/pods/test-rolling-update-deployment-664dd8fc7f-7d2rj bfadbf53-e3f5-4419-b99d-ff032007108c 11847323 0 2020-04-29 00:10:04 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 3626bedc-2bd8-41ca-92a1-b73caca064dc 0xc004710507 0xc004710508}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9rms8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9rms8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9rms8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[st
ring]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:10:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:10:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.94,StartTime:2020-04-29 00:10:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 00:10:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://68194cd1b485b00f1a0fcb883aadb961c4f041d080f3d473b1037e1367fd4866,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:10:08.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-918" for this suite. 
• [SLOW TEST:9.148 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":129,"skipped":2116,"failed":0} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:10:09.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-8814 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8814 to expose endpoints map[] Apr 29 00:10:09.247: INFO: Get endpoints failed (41.956822ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 29 00:10:10.251: INFO: successfully validated that service multi-endpoint-test in namespace services-8814 exposes endpoints map[] (1.045841019s elapsed) STEP: Creating pod pod1 in namespace services-8814 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8814 to 
expose endpoints map[pod1:[100]] Apr 29 00:10:13.293: INFO: successfully validated that service multi-endpoint-test in namespace services-8814 exposes endpoints map[pod1:[100]] (3.034530893s elapsed) STEP: Creating pod pod2 in namespace services-8814 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8814 to expose endpoints map[pod1:[100] pod2:[101]] Apr 29 00:10:16.492: INFO: successfully validated that service multi-endpoint-test in namespace services-8814 exposes endpoints map[pod1:[100] pod2:[101]] (3.195635515s elapsed) STEP: Deleting pod pod1 in namespace services-8814 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8814 to expose endpoints map[pod2:[101]] Apr 29 00:10:17.551: INFO: successfully validated that service multi-endpoint-test in namespace services-8814 exposes endpoints map[pod2:[101]] (1.054159071s elapsed) STEP: Deleting pod pod2 in namespace services-8814 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8814 to expose endpoints map[] Apr 29 00:10:18.586: INFO: successfully validated that service multi-endpoint-test in namespace services-8814 exposes endpoints map[] (1.030015872s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:10:18.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8814" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.664 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":130,"skipped":2119,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:10:18.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery 
document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:10:18.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4680" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":131,"skipped":2122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:10:18.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:10:18.828: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 29 00:10:20.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1505 create -f -' Apr 29 00:10:23.877: INFO: stderr: "" Apr 29 
00:10:23.877: INFO: stdout: "e2e-test-crd-publish-openapi-5422-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 29 00:10:23.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1505 delete e2e-test-crd-publish-openapi-5422-crds test-cr' Apr 29 00:10:23.986: INFO: stderr: "" Apr 29 00:10:23.986: INFO: stdout: "e2e-test-crd-publish-openapi-5422-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 29 00:10:23.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1505 apply -f -' Apr 29 00:10:24.351: INFO: stderr: "" Apr 29 00:10:24.351: INFO: stdout: "e2e-test-crd-publish-openapi-5422-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 29 00:10:24.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1505 delete e2e-test-crd-publish-openapi-5422-crds test-cr' Apr 29 00:10:24.478: INFO: stderr: "" Apr 29 00:10:24.478: INFO: stdout: "e2e-test-crd-publish-openapi-5422-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 29 00:10:24.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5422-crds' Apr 29 00:10:24.710: INFO: stderr: "" Apr 29 00:10:24.710: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5422-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:10:27.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1505" for this suite. 
• [SLOW TEST:8.867 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":132,"skipped":2145,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:10:27.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 29 00:10:27.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-3423 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 29 00:10:27.765: INFO: stderr: "" Apr 29 00:10:27.765: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 29 00:10:27.765: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 29 00:10:27.765: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3423" to be "running and ready, or succeeded" Apr 29 00:10:27.772: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.709316ms Apr 29 00:10:29.776: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010240894s Apr 29 00:10:31.780: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.014581213s Apr 29 00:10:31.780: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 29 00:10:31.780: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 29 00:10:31.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3423' Apr 29 00:10:31.902: INFO: stderr: "" Apr 29 00:10:31.902: INFO: stdout: "I0429 00:10:30.023583 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/x2rr 209\nI0429 00:10:30.223848 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/v44 434\nI0429 00:10:30.423751 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/kj4 302\nI0429 00:10:30.623726 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/vzhx 293\nI0429 00:10:30.823752 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/5xz2 276\nI0429 00:10:31.023733 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/rv8 504\nI0429 00:10:31.223767 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/gqxq 338\nI0429 00:10:31.423736 1 logs_generator.go:76] 7 PUT 
/api/v1/namespaces/default/pods/gwmh 349\nI0429 00:10:31.623756 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/h8hh 547\nI0429 00:10:31.823738 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/msr 278\n" STEP: limiting log lines Apr 29 00:10:31.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3423 --tail=1' Apr 29 00:10:32.034: INFO: stderr: "" Apr 29 00:10:32.034: INFO: stdout: "I0429 00:10:32.023726 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/4jr 478\n" Apr 29 00:10:32.034: INFO: got output "I0429 00:10:32.023726 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/4jr 478\n" STEP: limiting log bytes Apr 29 00:10:32.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3423 --limit-bytes=1' Apr 29 00:10:32.139: INFO: stderr: "" Apr 29 00:10:32.139: INFO: stdout: "I" Apr 29 00:10:32.139: INFO: got output "I" STEP: exposing timestamps Apr 29 00:10:32.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3423 --tail=1 --timestamps' Apr 29 00:10:32.231: INFO: stderr: "" Apr 29 00:10:32.231: INFO: stdout: "2020-04-29T00:10:32.223889215Z I0429 00:10:32.223723 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/xvk7 262\n" Apr 29 00:10:32.231: INFO: got output "2020-04-29T00:10:32.223889215Z I0429 00:10:32.223723 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/xvk7 262\n" STEP: restricting to a time range Apr 29 00:10:34.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3423 --since=1s' Apr 29 00:10:34.837: INFO: stderr: "" Apr 29 00:10:34.837: 
INFO: stdout: "I0429 00:10:34.023794 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/bdk9 319\nI0429 00:10:34.223741 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bcnx 534\nI0429 00:10:34.423766 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/9v7g 529\nI0429 00:10:34.623751 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/c7gl 322\nI0429 00:10:34.823749 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/f5rp 467\n" Apr 29 00:10:34.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3423 --since=24h' Apr 29 00:10:34.934: INFO: stderr: "" Apr 29 00:10:34.934: INFO: stdout: "I0429 00:10:30.023583 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/x2rr 209\nI0429 00:10:30.223848 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/v44 434\nI0429 00:10:30.423751 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/kj4 302\nI0429 00:10:30.623726 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/vzhx 293\nI0429 00:10:30.823752 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/5xz2 276\nI0429 00:10:31.023733 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/rv8 504\nI0429 00:10:31.223767 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/gqxq 338\nI0429 00:10:31.423736 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/gwmh 349\nI0429 00:10:31.623756 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/h8hh 547\nI0429 00:10:31.823738 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/msr 278\nI0429 00:10:32.023726 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/4jr 478\nI0429 00:10:32.223723 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/xvk7 262\nI0429 00:10:32.423734 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/8t9k 439\nI0429 00:10:32.623736 1 
logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/75m 407\nI0429 00:10:32.823732 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/scq 272\nI0429 00:10:33.023725 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/z6d 592\nI0429 00:10:33.223728 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/k9qb 542\nI0429 00:10:33.423752 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/8ttv 346\nI0429 00:10:33.623732 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/8rr 242\nI0429 00:10:33.823733 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/jjqt 401\nI0429 00:10:34.023794 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/bdk9 319\nI0429 00:10:34.223741 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/bcnx 534\nI0429 00:10:34.423766 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/9v7g 529\nI0429 00:10:34.623751 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/c7gl 322\nI0429 00:10:34.823749 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/f5rp 467\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 29 00:10:34.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3423' Apr 29 00:10:42.748: INFO: stderr: "" Apr 29 00:10:42.748: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:10:42.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3423" for this suite. 
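The test above exercises kubectl's log-filtering flags: `--tail=1` returned only the last line, `--limit-bytes=1` returned a single byte (`"I"`), and `--since` restricted output to a time window. As a rough sketch of what those first two flags do (kubectl performs this filtering server-side against the kubelet; the helper names here are hypothetical, for illustration only):

```python
def tail(log: str, n: int) -> str:
    # Mimics `kubectl logs --tail=N`: keep only the last N lines.
    lines = log.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log: str, n: int) -> str:
    # Mimics `kubectl logs --limit-bytes=N`: truncate to the first N bytes,
    # which is why the test above got back just "I" from an INFO line.
    return log.encode()[:n].decode(errors="ignore")

log = "I0429 00:10:31.623756 line 8\nI0429 00:10:31.823738 line 9\n"
print(tail(log, 1))         # only the final line
print(limit_bytes(log, 1))  # just "I"
```

The actual flags are documented options of `kubectl logs`; this snippet only models their observable effect on the text stream.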
• [SLOW TEST:15.159 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":133,"skipped":2154,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:10:42.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:10:46.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8298" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2159,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:10:46.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:10:46.943: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5741 I0429 00:10:46.967612 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5741, replica count: 1 I0429 00:10:48.018113 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 00:10:49.018456 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 00:10:50.018664 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 00:10:50.176: INFO: Created: latency-svc-5zpw7 Apr 29 00:10:50.197: INFO: Got endpoints: latency-svc-5zpw7 [79.037572ms] Apr 29 00:10:50.227: INFO: Created: latency-svc-kxk8f Apr 29 00:10:50.242: INFO: Got endpoints: latency-svc-kxk8f 
[44.698901ms] Apr 29 00:10:50.269: INFO: Created: latency-svc-qkxqg Apr 29 00:10:50.308: INFO: Got endpoints: latency-svc-qkxqg [110.201473ms] Apr 29 00:10:50.316: INFO: Created: latency-svc-jjvvp Apr 29 00:10:50.337: INFO: Got endpoints: latency-svc-jjvvp [139.005198ms] Apr 29 00:10:50.360: INFO: Created: latency-svc-k9kbf Apr 29 00:10:50.379: INFO: Got endpoints: latency-svc-k9kbf [181.040639ms] Apr 29 00:10:50.451: INFO: Created: latency-svc-r4qqq Apr 29 00:10:50.469: INFO: Got endpoints: latency-svc-r4qqq [270.460949ms] Apr 29 00:10:50.492: INFO: Created: latency-svc-hn9kr Apr 29 00:10:50.511: INFO: Got endpoints: latency-svc-hn9kr [313.391592ms] Apr 29 00:10:50.539: INFO: Created: latency-svc-sfn5s Apr 29 00:10:50.578: INFO: Got endpoints: latency-svc-sfn5s [379.444813ms] Apr 29 00:10:50.599: INFO: Created: latency-svc-8bjdh Apr 29 00:10:50.618: INFO: Got endpoints: latency-svc-8bjdh [419.815397ms] Apr 29 00:10:50.640: INFO: Created: latency-svc-qhjq6 Apr 29 00:10:50.660: INFO: Got endpoints: latency-svc-qhjq6 [461.990438ms] Apr 29 00:10:50.709: INFO: Created: latency-svc-z4sr2 Apr 29 00:10:50.726: INFO: Got endpoints: latency-svc-z4sr2 [527.649036ms] Apr 29 00:10:50.744: INFO: Created: latency-svc-z8827 Apr 29 00:10:50.756: INFO: Got endpoints: latency-svc-z8827 [557.210607ms] Apr 29 00:10:50.780: INFO: Created: latency-svc-2ww7p Apr 29 00:10:50.791: INFO: Got endpoints: latency-svc-2ww7p [592.799433ms] Apr 29 00:10:50.847: INFO: Created: latency-svc-4qrvh Apr 29 00:10:50.875: INFO: Got endpoints: latency-svc-4qrvh [676.673817ms] Apr 29 00:10:50.917: INFO: Created: latency-svc-f84xz Apr 29 00:10:50.930: INFO: Got endpoints: latency-svc-f84xz [732.074791ms] Apr 29 00:10:50.985: INFO: Created: latency-svc-6kqks Apr 29 00:10:51.027: INFO: Got endpoints: latency-svc-6kqks [828.685934ms] Apr 29 00:10:51.027: INFO: Created: latency-svc-hw6pk Apr 29 00:10:51.062: INFO: Got endpoints: latency-svc-hw6pk [819.43814ms] Apr 29 00:10:51.078: INFO: Created: 
latency-svc-5nr87 Apr 29 00:10:51.122: INFO: Got endpoints: latency-svc-5nr87 [814.363116ms] Apr 29 00:10:51.144: INFO: Created: latency-svc-4gr44 Apr 29 00:10:51.158: INFO: Got endpoints: latency-svc-4gr44 [820.827533ms] Apr 29 00:10:51.175: INFO: Created: latency-svc-gsvt4 Apr 29 00:10:51.194: INFO: Got endpoints: latency-svc-gsvt4 [815.556637ms] Apr 29 00:10:51.212: INFO: Created: latency-svc-qzjf6 Apr 29 00:10:51.260: INFO: Got endpoints: latency-svc-qzjf6 [791.310696ms] Apr 29 00:10:51.271: INFO: Created: latency-svc-c4mb5 Apr 29 00:10:51.282: INFO: Got endpoints: latency-svc-c4mb5 [770.961906ms] Apr 29 00:10:51.300: INFO: Created: latency-svc-8ctpd Apr 29 00:10:51.318: INFO: Got endpoints: latency-svc-8ctpd [740.821736ms] Apr 29 00:10:51.336: INFO: Created: latency-svc-gqflg Apr 29 00:10:51.355: INFO: Got endpoints: latency-svc-gqflg [737.099235ms] Apr 29 00:10:51.392: INFO: Created: latency-svc-c2fbc Apr 29 00:10:51.396: INFO: Got endpoints: latency-svc-c2fbc [735.983115ms] Apr 29 00:10:51.415: INFO: Created: latency-svc-sb2n9 Apr 29 00:10:51.432: INFO: Got endpoints: latency-svc-sb2n9 [706.606259ms] Apr 29 00:10:51.445: INFO: Created: latency-svc-n8zbq Apr 29 00:10:51.463: INFO: Got endpoints: latency-svc-n8zbq [706.947871ms] Apr 29 00:10:51.475: INFO: Created: latency-svc-dq6sf Apr 29 00:10:51.486: INFO: Got endpoints: latency-svc-dq6sf [695.308263ms] Apr 29 00:10:51.530: INFO: Created: latency-svc-hffjv Apr 29 00:10:51.541: INFO: Got endpoints: latency-svc-hffjv [666.32932ms] Apr 29 00:10:51.559: INFO: Created: latency-svc-llpj4 Apr 29 00:10:51.571: INFO: Got endpoints: latency-svc-llpj4 [641.313763ms] Apr 29 00:10:51.603: INFO: Created: latency-svc-2t4vf Apr 29 00:10:51.619: INFO: Got endpoints: latency-svc-2t4vf [592.418527ms] Apr 29 00:10:51.716: INFO: Created: latency-svc-vwzn7 Apr 29 00:10:51.733: INFO: Got endpoints: latency-svc-vwzn7 [671.542513ms] Apr 29 00:10:51.762: INFO: Created: latency-svc-dd26x Apr 29 00:10:51.816: INFO: Got endpoints: 
latency-svc-dd26x [694.228448ms] Apr 29 00:10:51.853: INFO: Created: latency-svc-grz4m Apr 29 00:10:51.871: INFO: Got endpoints: latency-svc-grz4m [712.943065ms] Apr 29 00:10:51.890: INFO: Created: latency-svc-254sz Apr 29 00:10:51.909: INFO: Got endpoints: latency-svc-254sz [714.886203ms] Apr 29 00:10:51.967: INFO: Created: latency-svc-t2zzx Apr 29 00:10:51.984: INFO: Got endpoints: latency-svc-t2zzx [723.641119ms] Apr 29 00:10:52.002: INFO: Created: latency-svc-44944 Apr 29 00:10:52.019: INFO: Got endpoints: latency-svc-44944 [736.837028ms] Apr 29 00:10:52.038: INFO: Created: latency-svc-xj8gr Apr 29 00:10:52.068: INFO: Got endpoints: latency-svc-xj8gr [749.615152ms] Apr 29 00:10:52.087: INFO: Created: latency-svc-q2kqg Apr 29 00:10:52.111: INFO: Got endpoints: latency-svc-q2kqg [756.212089ms] Apr 29 00:10:52.148: INFO: Created: latency-svc-9k6dk Apr 29 00:10:52.157: INFO: Got endpoints: latency-svc-9k6dk [761.351494ms] Apr 29 00:10:52.221: INFO: Created: latency-svc-rtzwv Apr 29 00:10:52.248: INFO: Got endpoints: latency-svc-rtzwv [815.847669ms] Apr 29 00:10:52.280: INFO: Created: latency-svc-7pkms Apr 29 00:10:52.326: INFO: Got endpoints: latency-svc-7pkms [863.109076ms] Apr 29 00:10:52.340: INFO: Created: latency-svc-84vbz Apr 29 00:10:52.359: INFO: Got endpoints: latency-svc-84vbz [872.163698ms] Apr 29 00:10:52.394: INFO: Created: latency-svc-l2kxv Apr 29 00:10:52.457: INFO: Got endpoints: latency-svc-l2kxv [915.996884ms] Apr 29 00:10:52.488: INFO: Created: latency-svc-gtt9b Apr 29 00:10:52.512: INFO: Got endpoints: latency-svc-gtt9b [940.494991ms] Apr 29 00:10:52.531: INFO: Created: latency-svc-ccbbp Apr 29 00:10:52.549: INFO: Got endpoints: latency-svc-ccbbp [929.895691ms] Apr 29 00:10:52.595: INFO: Created: latency-svc-krr47 Apr 29 00:10:52.615: INFO: Got endpoints: latency-svc-krr47 [881.564955ms] Apr 29 00:10:52.650: INFO: Created: latency-svc-jmtlc Apr 29 00:10:52.674: INFO: Got endpoints: latency-svc-jmtlc [857.301049ms] Apr 29 00:10:52.747: INFO: 
Created: latency-svc-5zkd9 Apr 29 00:10:52.749: INFO: Got endpoints: latency-svc-5zkd9 [878.704314ms] Apr 29 00:10:52.783: INFO: Created: latency-svc-km2sm Apr 29 00:10:52.798: INFO: Got endpoints: latency-svc-km2sm [888.757803ms] Apr 29 00:10:52.820: INFO: Created: latency-svc-r9p25 Apr 29 00:10:52.883: INFO: Got endpoints: latency-svc-r9p25 [898.614245ms] Apr 29 00:10:52.907: INFO: Created: latency-svc-vbhfk Apr 29 00:10:52.918: INFO: Got endpoints: latency-svc-vbhfk [898.793701ms] Apr 29 00:10:52.962: INFO: Created: latency-svc-srkql Apr 29 00:10:52.972: INFO: Got endpoints: latency-svc-srkql [903.850032ms] Apr 29 00:10:53.002: INFO: Created: latency-svc-j4bbg Apr 29 00:10:53.008: INFO: Got endpoints: latency-svc-j4bbg [896.829456ms] Apr 29 00:10:53.047: INFO: Created: latency-svc-w26hd Apr 29 00:10:53.056: INFO: Got endpoints: latency-svc-w26hd [898.116773ms] Apr 29 00:10:53.101: INFO: Created: latency-svc-pvwvf Apr 29 00:10:53.152: INFO: Got endpoints: latency-svc-pvwvf [903.869743ms] Apr 29 00:10:53.178: INFO: Created: latency-svc-phwt4 Apr 29 00:10:53.194: INFO: Got endpoints: latency-svc-phwt4 [868.593895ms] Apr 29 00:10:53.219: INFO: Created: latency-svc-2qzm2 Apr 29 00:10:53.231: INFO: Got endpoints: latency-svc-2qzm2 [872.063325ms] Apr 29 00:10:53.244: INFO: Created: latency-svc-dxwhc Apr 29 00:10:53.294: INFO: Got endpoints: latency-svc-dxwhc [836.568486ms] Apr 29 00:10:53.305: INFO: Created: latency-svc-gggzn Apr 29 00:10:53.321: INFO: Got endpoints: latency-svc-gggzn [809.02162ms] Apr 29 00:10:53.347: INFO: Created: latency-svc-6ns55 Apr 29 00:10:53.433: INFO: Got endpoints: latency-svc-6ns55 [884.082526ms] Apr 29 00:10:53.435: INFO: Created: latency-svc-598r5 Apr 29 00:10:53.452: INFO: Got endpoints: latency-svc-598r5 [837.225388ms] Apr 29 00:10:53.478: INFO: Created: latency-svc-r9gkm Apr 29 00:10:53.494: INFO: Got endpoints: latency-svc-r9gkm [819.694339ms] Apr 29 00:10:53.521: INFO: Created: latency-svc-nhpbd Apr 29 00:10:53.577: INFO: Got 
endpoints: latency-svc-nhpbd [827.530269ms] Apr 29 00:10:53.605: INFO: Created: latency-svc-nlfx9 Apr 29 00:10:53.619: INFO: Got endpoints: latency-svc-nlfx9 [821.019717ms] Apr 29 00:10:53.639: INFO: Created: latency-svc-frhd4 Apr 29 00:10:53.656: INFO: Got endpoints: latency-svc-frhd4 [773.120021ms] Apr 29 00:10:53.715: INFO: Created: latency-svc-pgvbw Apr 29 00:10:53.760: INFO: Got endpoints: latency-svc-pgvbw [842.471237ms] Apr 29 00:10:53.786: INFO: Created: latency-svc-vkhpc Apr 29 00:10:53.799: INFO: Got endpoints: latency-svc-vkhpc [826.746998ms] Apr 29 00:10:53.858: INFO: Created: latency-svc-xhmrn Apr 29 00:10:53.865: INFO: Got endpoints: latency-svc-xhmrn [856.980121ms] Apr 29 00:10:53.909: INFO: Created: latency-svc-zzbq7 Apr 29 00:10:53.933: INFO: Got endpoints: latency-svc-zzbq7 [877.777219ms] Apr 29 00:10:53.996: INFO: Created: latency-svc-ldwjk Apr 29 00:10:54.004: INFO: Got endpoints: latency-svc-ldwjk [851.876629ms] Apr 29 00:10:54.025: INFO: Created: latency-svc-wkh25 Apr 29 00:10:54.041: INFO: Got endpoints: latency-svc-wkh25 [846.523128ms] Apr 29 00:10:54.061: INFO: Created: latency-svc-drdg4 Apr 29 00:10:54.150: INFO: Created: latency-svc-lktgh Apr 29 00:10:54.153: INFO: Got endpoints: latency-svc-drdg4 [921.742563ms] Apr 29 00:10:54.166: INFO: Got endpoints: latency-svc-lktgh [871.655281ms] Apr 29 00:10:54.197: INFO: Created: latency-svc-cqk8m Apr 29 00:10:54.207: INFO: Got endpoints: latency-svc-cqk8m [886.525831ms] Apr 29 00:10:54.235: INFO: Created: latency-svc-phgr5 Apr 29 00:10:54.290: INFO: Got endpoints: latency-svc-phgr5 [856.454289ms] Apr 29 00:10:54.317: INFO: Created: latency-svc-h6sp9 Apr 29 00:10:54.332: INFO: Got endpoints: latency-svc-h6sp9 [879.830693ms] Apr 29 00:10:54.372: INFO: Created: latency-svc-bfjfw Apr 29 00:10:54.410: INFO: Got endpoints: latency-svc-bfjfw [916.757207ms] Apr 29 00:10:54.481: INFO: Created: latency-svc-r55nt Apr 29 00:10:54.500: INFO: Got endpoints: latency-svc-r55nt [922.846121ms] Apr 29 00:10:54.565: 
INFO: Created: latency-svc-n98p6 Apr 29 00:10:54.725: INFO: Got endpoints: latency-svc-n98p6 [1.10575964s] Apr 29 00:10:54.737: INFO: Created: latency-svc-lrdmr Apr 29 00:10:55.128: INFO: Got endpoints: latency-svc-lrdmr [1.472243814s] Apr 29 00:10:55.152: INFO: Created: latency-svc-xltt6 Apr 29 00:10:55.171: INFO: Got endpoints: latency-svc-xltt6 [1.410559959s] Apr 29 00:10:55.212: INFO: Created: latency-svc-b2j85 Apr 29 00:10:55.226: INFO: Got endpoints: latency-svc-b2j85 [1.427082803s] Apr 29 00:10:55.302: INFO: Created: latency-svc-6r9nd Apr 29 00:10:55.322: INFO: Got endpoints: latency-svc-6r9nd [1.456879723s] Apr 29 00:10:55.337: INFO: Created: latency-svc-kf6p5 Apr 29 00:10:55.346: INFO: Got endpoints: latency-svc-kf6p5 [1.412138861s] Apr 29 00:10:55.405: INFO: Created: latency-svc-p2b4d Apr 29 00:10:55.408: INFO: Got endpoints: latency-svc-p2b4d [1.404433689s] Apr 29 00:10:55.465: INFO: Created: latency-svc-hn2mq Apr 29 00:10:55.478: INFO: Got endpoints: latency-svc-hn2mq [1.436705222s] Apr 29 00:10:55.498: INFO: Created: latency-svc-8gbhv Apr 29 00:10:55.547: INFO: Got endpoints: latency-svc-8gbhv [1.394421886s] Apr 29 00:10:55.571: INFO: Created: latency-svc-5cnkz Apr 29 00:10:55.580: INFO: Got endpoints: latency-svc-5cnkz [1.413963591s] Apr 29 00:10:55.607: INFO: Created: latency-svc-fbqrc Apr 29 00:10:55.627: INFO: Got endpoints: latency-svc-fbqrc [1.419330907s] Apr 29 00:10:55.644: INFO: Created: latency-svc-zzpxw Apr 29 00:10:55.667: INFO: Got endpoints: latency-svc-zzpxw [1.377041885s] Apr 29 00:10:55.686: INFO: Created: latency-svc-qmmmq Apr 29 00:10:55.704: INFO: Got endpoints: latency-svc-qmmmq [1.371648387s] Apr 29 00:10:55.726: INFO: Created: latency-svc-qdfzd Apr 29 00:10:55.792: INFO: Got endpoints: latency-svc-qdfzd [1.38197603s] Apr 29 00:10:55.823: INFO: Created: latency-svc-gr4tm Apr 29 00:10:55.849: INFO: Got endpoints: latency-svc-gr4tm [1.348560673s] Apr 29 00:10:55.878: INFO: Created: latency-svc-l8vrx Apr 29 00:10:55.931: INFO: Got 
endpoints: latency-svc-l8vrx [1.205973821s] Apr 29 00:10:55.932: INFO: Created: latency-svc-wlptk Apr 29 00:10:55.972: INFO: Got endpoints: latency-svc-wlptk [844.112967ms] Apr 29 00:10:55.996: INFO: Created: latency-svc-dwk6g Apr 29 00:10:56.005: INFO: Got endpoints: latency-svc-dwk6g [833.548165ms] Apr 29 00:10:56.020: INFO: Created: latency-svc-v8496 Apr 29 00:10:56.028: INFO: Got endpoints: latency-svc-v8496 [802.390085ms] Apr 29 00:10:56.075: INFO: Created: latency-svc-8ppwd Apr 29 00:10:56.099: INFO: Got endpoints: latency-svc-8ppwd [777.28873ms] Apr 29 00:10:56.136: INFO: Created: latency-svc-fcbp4 Apr 29 00:10:56.155: INFO: Got endpoints: latency-svc-fcbp4 [809.129418ms] Apr 29 00:10:56.243: INFO: Created: latency-svc-dwbl7 Apr 29 00:10:56.290: INFO: Got endpoints: latency-svc-dwbl7 [881.66849ms] Apr 29 00:10:56.291: INFO: Created: latency-svc-pqk5c Apr 29 00:10:56.327: INFO: Got endpoints: latency-svc-pqk5c [849.253062ms] Apr 29 00:10:56.375: INFO: Created: latency-svc-wfknh Apr 29 00:10:56.423: INFO: Got endpoints: latency-svc-wfknh [876.032457ms] Apr 29 00:10:56.464: INFO: Created: latency-svc-78q6q Apr 29 00:10:56.494: INFO: Got endpoints: latency-svc-78q6q [914.122376ms] Apr 29 00:10:56.518: INFO: Created: latency-svc-rpprc Apr 29 00:10:56.537: INFO: Got endpoints: latency-svc-rpprc [910.133302ms] Apr 29 00:10:56.716: INFO: Created: latency-svc-qhskj Apr 29 00:10:56.741: INFO: Created: latency-svc-54zm8 Apr 29 00:10:56.741: INFO: Got endpoints: latency-svc-qhskj [1.074254815s] Apr 29 00:10:56.771: INFO: Got endpoints: latency-svc-54zm8 [1.066939001s] Apr 29 00:10:57.051: INFO: Created: latency-svc-jlzq4 Apr 29 00:10:57.082: INFO: Got endpoints: latency-svc-jlzq4 [1.289740522s] Apr 29 00:10:57.083: INFO: Created: latency-svc-74hzw Apr 29 00:10:57.100: INFO: Got endpoints: latency-svc-74hzw [1.251756397s] Apr 29 00:10:57.123: INFO: Created: latency-svc-mztnn Apr 29 00:10:57.147: INFO: Got endpoints: latency-svc-mztnn [1.216112529s] Apr 29 00:10:57.212: 
INFO: Created: latency-svc-q5dkd Apr 29 00:10:57.227: INFO: Got endpoints: latency-svc-q5dkd [1.254267845s] Apr 29 00:10:57.256: INFO: Created: latency-svc-5nbg7 Apr 29 00:10:57.278: INFO: Got endpoints: latency-svc-5nbg7 [1.272802472s] Apr 29 00:10:57.300: INFO: Created: latency-svc-w4dvs Apr 29 00:10:57.368: INFO: Got endpoints: latency-svc-w4dvs [1.339131322s] Apr 29 00:10:57.394: INFO: Created: latency-svc-xtwbq Apr 29 00:10:57.407: INFO: Got endpoints: latency-svc-xtwbq [1.30755931s] Apr 29 00:10:57.424: INFO: Created: latency-svc-58qqq Apr 29 00:10:57.443: INFO: Got endpoints: latency-svc-58qqq [1.287770859s] Apr 29 00:10:57.539: INFO: Created: latency-svc-zkrfl Apr 29 00:10:57.544: INFO: Got endpoints: latency-svc-zkrfl [1.253922611s] Apr 29 00:10:57.569: INFO: Created: latency-svc-qp2kq Apr 29 00:10:57.579: INFO: Got endpoints: latency-svc-qp2kq [1.251864765s] Apr 29 00:10:57.606: INFO: Created: latency-svc-vrrk7 Apr 29 00:10:57.621: INFO: Got endpoints: latency-svc-vrrk7 [1.197819785s] Apr 29 00:10:57.655: INFO: Created: latency-svc-kz22v Apr 29 00:10:57.676: INFO: Created: latency-svc-flktg Apr 29 00:10:57.676: INFO: Got endpoints: latency-svc-kz22v [1.18228292s] Apr 29 00:10:57.699: INFO: Got endpoints: latency-svc-flktg [1.161934688s] Apr 29 00:10:57.713: INFO: Created: latency-svc-fhtcs Apr 29 00:10:57.723: INFO: Got endpoints: latency-svc-fhtcs [981.427656ms] Apr 29 00:10:57.736: INFO: Created: latency-svc-78h48 Apr 29 00:10:57.793: INFO: Got endpoints: latency-svc-78h48 [1.021652035s] Apr 29 00:10:57.819: INFO: Created: latency-svc-wp9r8 Apr 29 00:10:57.846: INFO: Got endpoints: latency-svc-wp9r8 [763.914462ms] Apr 29 00:10:57.873: INFO: Created: latency-svc-s2lfz Apr 29 00:10:57.886: INFO: Got endpoints: latency-svc-s2lfz [785.208031ms] Apr 29 00:10:57.937: INFO: Created: latency-svc-7vb8j Apr 29 00:10:58.000: INFO: Created: latency-svc-fm4m4 Apr 29 00:10:58.001: INFO: Got endpoints: latency-svc-7vb8j [853.285054ms] Apr 29 00:10:58.031: INFO: Got 
endpoints: latency-svc-fm4m4 [803.917131ms] Apr 29 00:10:58.068: INFO: Created: latency-svc-wmxnz Apr 29 00:10:58.078: INFO: Got endpoints: latency-svc-wmxnz [800.15658ms] Apr 29 00:10:58.113: INFO: Created: latency-svc-hn9pt Apr 29 00:10:58.126: INFO: Got endpoints: latency-svc-hn9pt [758.084392ms] Apr 29 00:10:58.156: INFO: Created: latency-svc-kklhr Apr 29 00:10:58.194: INFO: Got endpoints: latency-svc-kklhr [787.002897ms] Apr 29 00:10:58.216: INFO: Created: latency-svc-26m2k Apr 29 00:10:58.226: INFO: Got endpoints: latency-svc-26m2k [783.260463ms] Apr 29 00:10:58.252: INFO: Created: latency-svc-r48bs Apr 29 00:10:58.268: INFO: Got endpoints: latency-svc-r48bs [723.739631ms] Apr 29 00:10:58.286: INFO: Created: latency-svc-ndbgp Apr 29 00:10:58.349: INFO: Got endpoints: latency-svc-ndbgp [770.394299ms] Apr 29 00:10:58.352: INFO: Created: latency-svc-d8n5b Apr 29 00:10:58.358: INFO: Got endpoints: latency-svc-d8n5b [736.534647ms] Apr 29 00:10:58.390: INFO: Created: latency-svc-t8nc9 Apr 29 00:10:58.418: INFO: Got endpoints: latency-svc-t8nc9 [741.743531ms] Apr 29 00:10:58.439: INFO: Created: latency-svc-v2fdk Apr 29 00:10:58.499: INFO: Got endpoints: latency-svc-v2fdk [800.267297ms] Apr 29 00:10:58.508: INFO: Created: latency-svc-d98ft Apr 29 00:10:58.526: INFO: Got endpoints: latency-svc-d98ft [803.183671ms] Apr 29 00:10:58.551: INFO: Created: latency-svc-bndc7 Apr 29 00:10:58.563: INFO: Got endpoints: latency-svc-bndc7 [769.889778ms] Apr 29 00:10:58.582: INFO: Created: latency-svc-spzl7 Apr 29 00:10:58.599: INFO: Got endpoints: latency-svc-spzl7 [752.773767ms] Apr 29 00:10:58.643: INFO: Created: latency-svc-mjtxx Apr 29 00:10:58.653: INFO: Got endpoints: latency-svc-mjtxx [767.369337ms] Apr 29 00:10:58.672: INFO: Created: latency-svc-pqfpj Apr 29 00:10:58.689: INFO: Got endpoints: latency-svc-pqfpj [688.141603ms] Apr 29 00:10:58.706: INFO: Created: latency-svc-7hlmm Apr 29 00:10:58.725: INFO: Got endpoints: latency-svc-7hlmm [694.690009ms] Apr 29 00:10:58.742: 
INFO: Created: latency-svc-f4gr5 Apr 29 00:10:58.788: INFO: Got endpoints: latency-svc-f4gr5 [709.843079ms] Apr 29 00:10:58.809: INFO: Created: latency-svc-g649x Apr 29 00:10:58.821: INFO: Got endpoints: latency-svc-g649x [695.233714ms] Apr 29 00:10:58.846: INFO: Created: latency-svc-nz8qw Apr 29 00:10:58.862: INFO: Got endpoints: latency-svc-nz8qw [667.735534ms] Apr 29 00:10:58.876: INFO: Created: latency-svc-kl85j Apr 29 00:10:58.888: INFO: Got endpoints: latency-svc-kl85j [662.468442ms] Apr 29 00:10:58.930: INFO: Created: latency-svc-d5wkr Apr 29 00:10:58.952: INFO: Got endpoints: latency-svc-d5wkr [684.484937ms] Apr 29 00:10:58.953: INFO: Created: latency-svc-2trrr Apr 29 00:10:58.983: INFO: Got endpoints: latency-svc-2trrr [633.13783ms] Apr 29 00:10:59.063: INFO: Created: latency-svc-wwgsm Apr 29 00:10:59.080: INFO: Got endpoints: latency-svc-wwgsm [722.000494ms] Apr 29 00:10:59.080: INFO: Created: latency-svc-n86n2 Apr 29 00:10:59.089: INFO: Got endpoints: latency-svc-n86n2 [670.90151ms] Apr 29 00:10:59.104: INFO: Created: latency-svc-5rkwm Apr 29 00:10:59.113: INFO: Got endpoints: latency-svc-5rkwm [613.776154ms] Apr 29 00:10:59.133: INFO: Created: latency-svc-526vm Apr 29 00:10:59.150: INFO: Got endpoints: latency-svc-526vm [624.357726ms] Apr 29 00:10:59.212: INFO: Created: latency-svc-v6qz9 Apr 29 00:10:59.258: INFO: Got endpoints: latency-svc-v6qz9 [695.672022ms] Apr 29 00:10:59.259: INFO: Created: latency-svc-d9kd2 Apr 29 00:10:59.296: INFO: Got endpoints: latency-svc-d9kd2 [696.501112ms] Apr 29 00:10:59.361: INFO: Created: latency-svc-nm2kg Apr 29 00:10:59.372: INFO: Got endpoints: latency-svc-nm2kg [718.770601ms] Apr 29 00:10:59.390: INFO: Created: latency-svc-rwj5z Apr 29 00:10:59.408: INFO: Got endpoints: latency-svc-rwj5z [718.844005ms] Apr 29 00:10:59.432: INFO: Created: latency-svc-qj2ll Apr 29 00:10:59.456: INFO: Got endpoints: latency-svc-qj2ll [730.85037ms] Apr 29 00:10:59.494: INFO: Created: latency-svc-266zk Apr 29 00:10:59.504: INFO: Got 
endpoints: latency-svc-266zk [716.123878ms] Apr 29 00:10:59.530: INFO: Created: latency-svc-sgcm2 Apr 29 00:10:59.554: INFO: Got endpoints: latency-svc-sgcm2 [732.87152ms] Apr 29 00:10:59.572: INFO: Created: latency-svc-b7p42 Apr 29 00:10:59.582: INFO: Got endpoints: latency-svc-b7p42 [719.898739ms] Apr 29 00:10:59.613: INFO: Created: latency-svc-f4gq8 Apr 29 00:10:59.616: INFO: Got endpoints: latency-svc-f4gq8 [727.57046ms] Apr 29 00:10:59.655: INFO: Created: latency-svc-m2nnz Apr 29 00:10:59.671: INFO: Got endpoints: latency-svc-m2nnz [718.21929ms] Apr 29 00:10:59.697: INFO: Created: latency-svc-llmvg Apr 29 00:10:59.712: INFO: Got endpoints: latency-svc-llmvg [729.810833ms] Apr 29 00:10:59.769: INFO: Created: latency-svc-ftmbv Apr 29 00:10:59.787: INFO: Got endpoints: latency-svc-ftmbv [707.72984ms] Apr 29 00:10:59.788: INFO: Created: latency-svc-4vkn8 Apr 29 00:10:59.823: INFO: Got endpoints: latency-svc-4vkn8 [733.703657ms] Apr 29 00:10:59.853: INFO: Created: latency-svc-hf884 Apr 29 00:10:59.864: INFO: Got endpoints: latency-svc-hf884 [750.551088ms] Apr 29 00:10:59.906: INFO: Created: latency-svc-q65cs Apr 29 00:10:59.923: INFO: Got endpoints: latency-svc-q65cs [772.680033ms] Apr 29 00:10:59.944: INFO: Created: latency-svc-h8jxr Apr 29 00:10:59.959: INFO: Got endpoints: latency-svc-h8jxr [700.790144ms] Apr 29 00:10:59.980: INFO: Created: latency-svc-4qk4p Apr 29 00:11:00.001: INFO: Got endpoints: latency-svc-4qk4p [705.398736ms] Apr 29 00:11:00.051: INFO: Created: latency-svc-5zg52 Apr 29 00:11:00.067: INFO: Got endpoints: latency-svc-5zg52 [695.34956ms] Apr 29 00:11:00.080: INFO: Created: latency-svc-qf889 Apr 29 00:11:00.091: INFO: Got endpoints: latency-svc-qf889 [683.003297ms] Apr 29 00:11:00.123: INFO: Created: latency-svc-w5lwp Apr 29 00:11:00.138: INFO: Got endpoints: latency-svc-w5lwp [681.596433ms] Apr 29 00:11:00.188: INFO: Created: latency-svc-sqg2g Apr 29 00:11:00.214: INFO: Got endpoints: latency-svc-sqg2g [709.760953ms] Apr 29 00:11:00.216: 
INFO: Created: latency-svc-4br4s Apr 29 00:11:00.238: INFO: Got endpoints: latency-svc-4br4s [683.90314ms] Apr 29 00:11:00.278: INFO: Created: latency-svc-bvmv9 Apr 29 00:11:00.307: INFO: Got endpoints: latency-svc-bvmv9 [725.641977ms] Apr 29 00:11:00.326: INFO: Created: latency-svc-4vqjj Apr 29 00:11:00.342: INFO: Got endpoints: latency-svc-4vqjj [725.805159ms] Apr 29 00:11:00.362: INFO: Created: latency-svc-5lb9r Apr 29 00:11:00.378: INFO: Got endpoints: latency-svc-5lb9r [707.455646ms] Apr 29 00:11:00.453: INFO: Created: latency-svc-rjpp6 Apr 29 00:11:00.474: INFO: Got endpoints: latency-svc-rjpp6 [761.367659ms] Apr 29 00:11:00.502: INFO: Created: latency-svc-vvgnb Apr 29 00:11:00.529: INFO: Got endpoints: latency-svc-vvgnb [741.438478ms] Apr 29 00:11:00.565: INFO: Created: latency-svc-nzlbv Apr 29 00:11:00.576: INFO: Got endpoints: latency-svc-nzlbv [753.678484ms] Apr 29 00:11:00.591: INFO: Created: latency-svc-z7rv7 Apr 29 00:11:00.600: INFO: Got endpoints: latency-svc-z7rv7 [736.429524ms] Apr 29 00:11:00.615: INFO: Created: latency-svc-zn2cn Apr 29 00:11:00.636: INFO: Got endpoints: latency-svc-zn2cn [713.00048ms] Apr 29 00:11:00.652: INFO: Created: latency-svc-hqppc Apr 29 00:11:00.721: INFO: Got endpoints: latency-svc-hqppc [761.935091ms] Apr 29 00:11:00.735: INFO: Created: latency-svc-w4glm Apr 29 00:11:00.750: INFO: Got endpoints: latency-svc-w4glm [749.196748ms] Apr 29 00:11:00.807: INFO: Created: latency-svc-gnwt7 Apr 29 00:11:00.864: INFO: Got endpoints: latency-svc-gnwt7 [797.151601ms] Apr 29 00:11:00.891: INFO: Created: latency-svc-lzvxm Apr 29 00:11:00.910: INFO: Got endpoints: latency-svc-lzvxm [819.455634ms] Apr 29 00:11:00.933: INFO: Created: latency-svc-dxs4v Apr 29 00:11:00.947: INFO: Got endpoints: latency-svc-dxs4v [808.692317ms] Apr 29 00:11:00.996: INFO: Created: latency-svc-w5d2g Apr 29 00:11:01.011: INFO: Got endpoints: latency-svc-w5d2g [797.459549ms] Apr 29 00:11:01.035: INFO: Created: latency-svc-pxz2t Apr 29 00:11:01.049: INFO: Got 
endpoints: latency-svc-pxz2t [810.931722ms] Apr 29 00:11:01.070: INFO: Created: latency-svc-hfhg5 Apr 29 00:11:01.085: INFO: Got endpoints: latency-svc-hfhg5 [777.280288ms] Apr 29 00:11:01.134: INFO: Created: latency-svc-jtqvr Apr 29 00:11:01.148: INFO: Got endpoints: latency-svc-jtqvr [806.034783ms] Apr 29 00:11:01.173: INFO: Created: latency-svc-pp9p8 Apr 29 00:11:01.187: INFO: Got endpoints: latency-svc-pp9p8 [808.350907ms] Apr 29 00:11:01.209: INFO: Created: latency-svc-bnpwj Apr 29 00:11:01.223: INFO: Got endpoints: latency-svc-bnpwj [749.218858ms] Apr 29 00:11:01.284: INFO: Created: latency-svc-6dk8g Apr 29 00:11:01.311: INFO: Got endpoints: latency-svc-6dk8g [781.948877ms] Apr 29 00:11:01.371: INFO: Created: latency-svc-gf629 Apr 29 00:11:01.445: INFO: Got endpoints: latency-svc-gf629 [869.014641ms] Apr 29 00:11:01.449: INFO: Created: latency-svc-b2wgz Apr 29 00:11:01.469: INFO: Got endpoints: latency-svc-b2wgz [868.991291ms] Apr 29 00:11:01.491: INFO: Created: latency-svc-qj97s Apr 29 00:11:01.534: INFO: Got endpoints: latency-svc-qj97s [897.40897ms] Apr 29 00:11:01.613: INFO: Created: latency-svc-84bsw Apr 29 00:11:01.625: INFO: Got endpoints: latency-svc-84bsw [903.651076ms] Apr 29 00:11:01.670: INFO: Created: latency-svc-gthdq Apr 29 00:11:01.745: INFO: Got endpoints: latency-svc-gthdq [994.626229ms] Apr 29 00:11:01.925: INFO: Created: latency-svc-5d9qx Apr 29 00:11:01.977: INFO: Created: latency-svc-gczrj Apr 29 00:11:01.977: INFO: Got endpoints: latency-svc-5d9qx [1.112864365s] Apr 29 00:11:01.996: INFO: Got endpoints: latency-svc-gczrj [1.085080316s] Apr 29 00:11:02.063: INFO: Created: latency-svc-5t96d Apr 29 00:11:02.073: INFO: Got endpoints: latency-svc-5t96d [1.126495523s] Apr 29 00:11:02.073: INFO: Latencies: [44.698901ms 110.201473ms 139.005198ms 181.040639ms 270.460949ms 313.391592ms 379.444813ms 419.815397ms 461.990438ms 527.649036ms 557.210607ms 592.418527ms 592.799433ms 613.776154ms 624.357726ms 633.13783ms 641.313763ms 662.468442ms 
666.32932ms 667.735534ms 670.90151ms 671.542513ms 676.673817ms 681.596433ms 683.003297ms 683.90314ms 684.484937ms 688.141603ms 694.228448ms 694.690009ms 695.233714ms 695.308263ms 695.34956ms 695.672022ms 696.501112ms 700.790144ms 705.398736ms 706.606259ms 706.947871ms 707.455646ms 707.72984ms 709.760953ms 709.843079ms 712.943065ms 713.00048ms 714.886203ms 716.123878ms 718.21929ms 718.770601ms 718.844005ms 719.898739ms 722.000494ms 723.641119ms 723.739631ms 725.641977ms 725.805159ms 727.57046ms 729.810833ms 730.85037ms 732.074791ms 732.87152ms 733.703657ms 735.983115ms 736.429524ms 736.534647ms 736.837028ms 737.099235ms 740.821736ms 741.438478ms 741.743531ms 749.196748ms 749.218858ms 749.615152ms 750.551088ms 752.773767ms 753.678484ms 756.212089ms 758.084392ms 761.351494ms 761.367659ms 761.935091ms 763.914462ms 767.369337ms 769.889778ms 770.394299ms 770.961906ms 772.680033ms 773.120021ms 777.280288ms 777.28873ms 781.948877ms 783.260463ms 785.208031ms 787.002897ms 791.310696ms 797.151601ms 797.459549ms 800.15658ms 800.267297ms 802.390085ms 803.183671ms 803.917131ms 806.034783ms 808.350907ms 808.692317ms 809.02162ms 809.129418ms 810.931722ms 814.363116ms 815.556637ms 815.847669ms 819.43814ms 819.455634ms 819.694339ms 820.827533ms 821.019717ms 826.746998ms 827.530269ms 828.685934ms 833.548165ms 836.568486ms 837.225388ms 842.471237ms 844.112967ms 846.523128ms 849.253062ms 851.876629ms 853.285054ms 856.454289ms 856.980121ms 857.301049ms 863.109076ms 868.593895ms 868.991291ms 869.014641ms 871.655281ms 872.063325ms 872.163698ms 876.032457ms 877.777219ms 878.704314ms 879.830693ms 881.564955ms 881.66849ms 884.082526ms 886.525831ms 888.757803ms 896.829456ms 897.40897ms 898.116773ms 898.614245ms 898.793701ms 903.651076ms 903.850032ms 903.869743ms 910.133302ms 914.122376ms 915.996884ms 916.757207ms 921.742563ms 922.846121ms 929.895691ms 940.494991ms 981.427656ms 994.626229ms 1.021652035s 1.066939001s 1.074254815s 1.085080316s 1.10575964s 1.112864365s 1.126495523s 1.161934688s 
1.18228292s 1.197819785s 1.205973821s 1.216112529s 1.251756397s 1.251864765s 1.253922611s 1.254267845s 1.272802472s 1.287770859s 1.289740522s 1.30755931s 1.339131322s 1.348560673s 1.371648387s 1.377041885s 1.38197603s 1.394421886s 1.404433689s 1.410559959s 1.412138861s 1.413963591s 1.419330907s 1.427082803s 1.436705222s 1.456879723s 1.472243814s] Apr 29 00:11:02.074: INFO: 50 %ile: 803.183671ms Apr 29 00:11:02.074: INFO: 90 %ile: 1.254267845s Apr 29 00:11:02.074: INFO: 99 %ile: 1.456879723s Apr 29 00:11:02.074: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5741" for this suite. • [SLOW TEST:15.187 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":135,"skipped":2173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:02.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in 
namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0429 00:11:12.956075 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 29 00:11:12.956: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:12.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9086" for this suite. 
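The garbage-collector scenario above gives half of the pods created by `simpletest-rc-to-be-deleted` a second owner, `simpletest-rc-to-stay`, and then deletes only the first controller. The invariant being checked is that a dependent with at least one surviving owner is never collected. A minimal sketch of the dual-owner metadata involved (pod name and UIDs here are placeholders, not values from this run):

```yaml
# Hypothetical dependent pod with two owners. When
# simpletest-rc-to-be-deleted is deleted, the GC must leave this
# pod alone because simpletest-rc-to-stay still owns it.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod-example    # illustrative name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # placeholder UID
```

Only once every owner in `ownerReferences` is gone does the garbage collector consider the pod an orphan and delete it.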
• [SLOW TEST:10.941 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":136,"skipped":2199,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:13.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-5936674d-0a57-44f7-bb51-01a128c93d25 STEP: Creating a pod to test consume secrets Apr 29 00:11:13.170: INFO: Waiting up to 5m0s for pod "pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22" in namespace "secrets-7999" to be "Succeeded or Failed" Apr 29 00:11:13.199: INFO: Pod "pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.230893ms Apr 29 00:11:15.206: INFO: Pod "pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035619749s Apr 29 00:11:17.326: INFO: Pod "pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155678999s STEP: Saw pod success Apr 29 00:11:17.326: INFO: Pod "pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22" satisfied condition "Succeeded or Failed" Apr 29 00:11:17.332: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22 container secret-volume-test: STEP: delete the pod Apr 29 00:11:17.386: INFO: Waiting for pod pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22 to disappear Apr 29 00:11:17.391: INFO: Pod pod-secrets-b3049183-b1a0-4c94-9a89-239b4c20fb22 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:17.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7999" for this suite. 
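The secrets test above consumes a secret through a volume that uses an explicit key-to-path mapping rather than the default one-file-per-key layout. A minimal manifest of the same shape, assuming hypothetical key and path names (the secret name is taken from the log; the image, key, and paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example       # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-5936674d-0a57-44f7-bb51-01a128c93d25
      items:                      # the "mappings": remap a key to a custom path
      - key: data-1               # assumed key name
        path: new-path-data-1     # assumed target path inside the mount
  containers:
  - name: secret-volume-test
    image: busybox                # assumed image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```

The test then waits for the pod to reach `Succeeded` and inspects the container logs to confirm the mapped file contents.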
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:17.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:11:17.571: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 29 00:11:22.595: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 29 00:11:22.595: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 29 00:11:24.610: INFO: Creating deployment "test-rollover-deployment" Apr 29 00:11:24.701: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 29 00:11:26.733: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 29 00:11:26.782: INFO: Ensure that both replica sets have 1 created replica Apr 29 00:11:26.814: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 29 00:11:26.882: INFO: Updating deployment test-rollover-deployment Apr 29 00:11:26.882: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment 
controller Apr 29 00:11:28.953: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 29 00:11:29.013: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 29 00:11:29.308: INFO: all replica sets need to contain the pod-template-hash label Apr 29 00:11:29.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715887, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:11:31.331: INFO: all replica sets need to contain the pod-template-hash label Apr 29 00:11:31.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723715890, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:11:33.351: INFO: all replica sets need to contain the pod-template-hash label Apr 29 00:11:33.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715890, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:11:35.316: INFO: all replica sets need to contain the pod-template-hash label Apr 29 00:11:35.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715890, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:11:37.316: INFO: all replica sets need to contain the pod-template-hash label Apr 29 00:11:37.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715890, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:11:39.313: INFO: all replica sets need to contain the pod-template-hash label Apr 29 00:11:39.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715890, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723715884, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:11:41.327: INFO: Apr 29 00:11:41.327: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 29 00:11:41.336: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3327 /apis/apps/v1/namespaces/deployment-3327/deployments/test-rollover-deployment d1712bc4-16cb-4485-aacc-738829839512 11849448 2 2020-04-29 00:11:24 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005444488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-29 00:11:24 +0000 UTC,LastTransitionTime:2020-04-29 00:11:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-29 00:11:40 +0000 UTC,LastTransitionTime:2020-04-29 00:11:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 29 00:11:41.338: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-3327 /apis/apps/v1/namespaces/deployment-3327/replicasets/test-rollover-deployment-78df7bc796 0d18dccf-9c5f-4bea-8a6f-4f6c98d1658c 11849437 2 2020-04-29 00:11:26 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d1712bc4-16cb-4485-aacc-738829839512 0xc005490967 0xc005490968}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0054909d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:11:41.338: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 29 00:11:41.338: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3327 /apis/apps/v1/namespaces/deployment-3327/replicasets/test-rollover-controller 8afe350d-acf6-4553-a03a-1999bbc1a818 11849447 2 2020-04-29 00:11:17 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d1712bc4-16cb-4485-aacc-738829839512 0xc00549087f 0xc005490890}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0054908f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:11:41.338: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-3327 /apis/apps/v1/namespaces/deployment-3327/replicasets/test-rollover-deployment-f6c94f66c d102006a-a77e-4501-a272-1611310e396d 11849175 2 2020-04-29 00:11:24 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d1712bc4-16cb-4485-aacc-738829839512 0xc005490a40 0xc005490a41}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005490ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:11:41.341: INFO: Pod 
"test-rollover-deployment-78df7bc796-p9zgg" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-p9zgg test-rollover-deployment-78df7bc796- deployment-3327 /api/v1/namespaces/deployment-3327/pods/test-rollover-deployment-78df7bc796-p9zgg e6977554-532a-41e5-8852-f5a7a4e95a32 11849264 0 2020-04-29 00:11:26 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 0d18dccf-9c5f-4bea-8a6f-4f6c98d1658c 0xc005491237 0xc005491238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ldk2n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ldk2n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ldk2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:11:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:11:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:11:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:11:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.90,StartTime:2020-04-29 
00:11:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 00:11:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://fe8b3c2219360762a5242ed9c7888ff1f6120bee5b4d490005105ea0263c1fbc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:41.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3327" for this suite. 
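The deployment object dump above can be condensed into a readable manifest. This sketch is reconstructed from the spec fields printed in the log (replicas, selector, strategy, `minReadySeconds`, and the agnhost image all appear verbatim in the dump); it is a reconstruction, not the test's literal source:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10             # new pod must be ready 10s before counting as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # old replica stays up until the new one is available
      maxSurge: 1                 # at most one extra pod during the rollover
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

The `minReadySeconds: 10` setting explains the repeated "all replica sets need to contain the pod-template-hash label" polling seen above: the controller holds `AvailableReplicas` at 1 until the new pod has been ready for the full window, so the test loops for roughly ten seconds before both old replica sets scale to zero.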
• [SLOW TEST:23.895 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":138,"skipped":2231,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:41.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Apr 29 00:11:41.618: INFO: Waiting up to 5m0s for pod "client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba" in namespace "containers-8082" to be "Succeeded or Failed" Apr 29 00:11:41.937: INFO: Pod "client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 319.817785ms Apr 29 00:11:43.943: INFO: Pod "client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325092591s Apr 29 00:11:45.947: INFO: Pod "client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.329301208s STEP: Saw pod success Apr 29 00:11:45.947: INFO: Pod "client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba" satisfied condition "Succeeded or Failed" Apr 29 00:11:45.950: INFO: Trying to get logs from node latest-worker2 pod client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba container test-container: STEP: delete the pod Apr 29 00:11:46.016: INFO: Waiting for pod client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba to disappear Apr 29 00:11:46.027: INFO: Pod client-containers-2a7808e3-62e0-40d9-9928-e09f89eb23ba no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:46.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8082" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:46.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:11:46.172: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a969d007-71be-4fc5-9990-667c0f93e4b0" in namespace "security-context-test-1149" to be "Succeeded or Failed" Apr 29 00:11:46.183: INFO: Pod "busybox-user-65534-a969d007-71be-4fc5-9990-667c0f93e4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.824684ms Apr 29 00:11:48.290: INFO: Pod "busybox-user-65534-a969d007-71be-4fc5-9990-667c0f93e4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117493892s Apr 29 00:11:50.293: INFO: Pod "busybox-user-65534-a969d007-71be-4fc5-9990-667c0f93e4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121281978s Apr 29 00:11:50.293: INFO: Pod "busybox-user-65534-a969d007-71be-4fc5-9990-667c0f93e4b0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:50.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1149" for this suite. 
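The security-context test above runs a container as uid 65534 and checks that it completes successfully. A minimal pod of the same shape (the pod name matches the prefix in the log; the image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534        # matches the pod name prefix in the log
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                # assumed image
    command: ["sh", "-c", "id -u"]  # assumed command; the test verifies the uid
    securityContext:
      runAsUser: 65534            # the conventional "nobody" uid being verified
```

As with the other `Succeeded or Failed` tests in this run, the framework polls the pod phase until it leaves `Pending` and asserts it reaches `Succeeded`.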
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:50.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-c3e0c5c7-1967-4e4e-b509-20d93c32ec31 STEP: Creating a pod to test consume configMaps Apr 29 00:11:50.391: INFO: Waiting up to 5m0s for pod "pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4" in namespace "configmap-6403" to be "Succeeded or Failed" Apr 29 00:11:50.394: INFO: Pod "pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.282999ms Apr 29 00:11:52.398: INFO: Pod "pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007421748s Apr 29 00:11:54.402: INFO: Pod "pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01188767s STEP: Saw pod success Apr 29 00:11:54.403: INFO: Pod "pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4" satisfied condition "Succeeded or Failed" Apr 29 00:11:54.406: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4 container configmap-volume-test: STEP: delete the pod Apr 29 00:11:54.426: INFO: Waiting for pod pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4 to disappear Apr 29 00:11:54.444: INFO: Pod pod-configmaps-e09bd8e0-d93d-4619-bde3-0dc72efdbdc4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:54.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6403" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2313,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:54.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 29 00:11:54.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1" in namespace "downward-api-8078" to be "Succeeded or Failed" Apr 29 00:11:54.538: INFO: Pod "downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059383ms Apr 29 00:11:56.542: INFO: Pod "downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014121092s Apr 29 00:11:58.546: INFO: Pod "downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018501511s STEP: Saw pod success Apr 29 00:11:58.546: INFO: Pod "downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1" satisfied condition "Succeeded or Failed" Apr 29 00:11:58.560: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1 container client-container: STEP: delete the pod Apr 29 00:11:58.594: INFO: Waiting for pod downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1 to disappear Apr 29 00:11:58.616: INFO: Pod downwardapi-volume-f0403cd3-4f9d-4b5b-9679-b188172775c1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:11:58.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8078" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2330,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:11:58.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Apr 29 00:11:58.727: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 29 00:11:58.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5654' Apr 29 00:11:59.076: INFO: stderr: "" Apr 29 00:11:59.076: INFO: stdout: "service/agnhost-slave created\n" Apr 29 00:11:59.076: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 29 00:11:59.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5654' Apr 29 
00:11:59.328: INFO: stderr: "" Apr 29 00:11:59.328: INFO: stdout: "service/agnhost-master created\n" Apr 29 00:11:59.328: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 29 00:11:59.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5654' Apr 29 00:11:59.621: INFO: stderr: "" Apr 29 00:11:59.621: INFO: stdout: "service/frontend created\n" Apr 29 00:11:59.621: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 29 00:11:59.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5654' Apr 29 00:11:59.883: INFO: stderr: "" Apr 29 00:11:59.883: INFO: stdout: "deployment.apps/frontend created\n" Apr 29 00:11:59.883: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 29 00:11:59.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5654' Apr 29 00:12:00.161: INFO: stderr: "" Apr 29 00:12:00.161: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 29 00:12:00.161: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 29 00:12:00.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5654' Apr 29 00:12:00.430: INFO: stderr: "" Apr 29 00:12:00.430: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 29 00:12:00.431: INFO: Waiting for all frontend pods to be Running. Apr 29 00:12:10.481: INFO: Waiting for frontend to serve content. Apr 29 00:12:10.492: INFO: Trying to add a new entry to the guestbook. Apr 29 00:12:10.504: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 29 00:12:10.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5654' Apr 29 00:12:10.714: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:12:10.714: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 29 00:12:10.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5654' Apr 29 00:12:10.872: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:12:10.872: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 29 00:12:10.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5654' Apr 29 00:12:11.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:12:11.034: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 29 00:12:11.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5654' Apr 29 00:12:11.141: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:12:11.141: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 29 00:12:11.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5654' Apr 29 00:12:11.249: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:12:11.249: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 29 00:12:11.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5654' Apr 29 00:12:11.343: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:12:11.343: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:12:11.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5654" for this suite. 
• [SLOW TEST:12.726 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":143,"skipped":2336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:12:11.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-5a465959-dd32-444a-8e3b-f1826ff1d377 STEP: Creating a pod to test consume configMaps Apr 29 00:12:12.301: INFO: Waiting up to 5m0s for pod "pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707" in namespace "configmap-8578" to be "Succeeded or Failed" Apr 29 00:12:12.645: INFO: Pod "pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707": Phase="Pending", Reason="", readiness=false. 
Elapsed: 343.041575ms Apr 29 00:12:14.665: INFO: Pod "pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363006429s Apr 29 00:12:16.671: INFO: Pod "pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.369703323s STEP: Saw pod success Apr 29 00:12:16.671: INFO: Pod "pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707" satisfied condition "Succeeded or Failed" Apr 29 00:12:16.722: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707 container configmap-volume-test: STEP: delete the pod Apr 29 00:12:16.795: INFO: Waiting for pod pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707 to disappear Apr 29 00:12:16.808: INFO: Pod pod-configmaps-1198b347-e51e-4425-a7bf-e327da50d707 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:12:16.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8578" for this suite. • [SLOW TEST:5.595 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2406,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:12:16.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:12:28.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8055" for this suite. • [SLOW TEST:11.316 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":275,"completed":145,"skipped":2415,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:12:28.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:12:28.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6021" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":146,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:12:28.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-3465 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 29 00:12:28.402: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 29 00:12:28.466: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 00:12:30.470: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 00:12:32.470: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:12:34.471: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:12:36.471: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:12:38.471: INFO: The status of Pod netserver-0 is Running 
(Ready = false) Apr 29 00:12:40.471: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:12:42.471: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:12:44.471: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 29 00:12:44.477: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 29 00:12:46.481: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 29 00:12:52.560: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.95:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3465 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 29 00:12:52.560: INFO: >>> kubeConfig: /root/.kube/config I0429 00:12:52.594848 7 log.go:172] (0xc0062386e0) (0xc001325040) Create stream I0429 00:12:52.594880 7 log.go:172] (0xc0062386e0) (0xc001325040) Stream added, broadcasting: 1 I0429 00:12:52.596493 7 log.go:172] (0xc0062386e0) Reply frame received for 1 I0429 00:12:52.596537 7 log.go:172] (0xc0062386e0) (0xc001734b40) Create stream I0429 00:12:52.596545 7 log.go:172] (0xc0062386e0) (0xc001734b40) Stream added, broadcasting: 3 I0429 00:12:52.597561 7 log.go:172] (0xc0062386e0) Reply frame received for 3 I0429 00:12:52.597591 7 log.go:172] (0xc0062386e0) (0xc0013250e0) Create stream I0429 00:12:52.597600 7 log.go:172] (0xc0062386e0) (0xc0013250e0) Stream added, broadcasting: 5 I0429 00:12:52.598364 7 log.go:172] (0xc0062386e0) Reply frame received for 5 I0429 00:12:52.660871 7 log.go:172] (0xc0062386e0) Data frame received for 5 I0429 00:12:52.660909 7 log.go:172] (0xc0013250e0) (5) Data frame handling I0429 00:12:52.660931 7 log.go:172] (0xc0062386e0) Data frame received for 3 I0429 00:12:52.660951 7 log.go:172] (0xc001734b40) (3) Data frame handling I0429 00:12:52.660968 7 log.go:172] (0xc001734b40) (3) Data frame sent I0429 
00:12:52.660995 7 log.go:172] (0xc0062386e0) Data frame received for 3 I0429 00:12:52.661006 7 log.go:172] (0xc001734b40) (3) Data frame handling I0429 00:12:52.665375 7 log.go:172] (0xc0062386e0) Data frame received for 1 I0429 00:12:52.665405 7 log.go:172] (0xc001325040) (1) Data frame handling I0429 00:12:52.665433 7 log.go:172] (0xc001325040) (1) Data frame sent I0429 00:12:52.665458 7 log.go:172] (0xc0062386e0) (0xc001325040) Stream removed, broadcasting: 1 I0429 00:12:52.665497 7 log.go:172] (0xc0062386e0) Go away received I0429 00:12:52.665630 7 log.go:172] (0xc0062386e0) (0xc001325040) Stream removed, broadcasting: 1 I0429 00:12:52.665657 7 log.go:172] (0xc0062386e0) (0xc001734b40) Stream removed, broadcasting: 3 I0429 00:12:52.665674 7 log.go:172] (0xc0062386e0) (0xc0013250e0) Stream removed, broadcasting: 5 Apr 29 00:12:52.665: INFO: Found all expected endpoints: [netserver-0] Apr 29 00:12:52.672: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.111:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3465 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 29 00:12:52.672: INFO: >>> kubeConfig: /root/.kube/config I0429 00:12:52.706652 7 log.go:172] (0xc002b48a50) (0xc001734f00) Create stream I0429 00:12:52.706681 7 log.go:172] (0xc002b48a50) (0xc001734f00) Stream added, broadcasting: 1 I0429 00:12:52.708685 7 log.go:172] (0xc002b48a50) Reply frame received for 1 I0429 00:12:52.708719 7 log.go:172] (0xc002b48a50) (0xc001734fa0) Create stream I0429 00:12:52.708726 7 log.go:172] (0xc002b48a50) (0xc001734fa0) Stream added, broadcasting: 3 I0429 00:12:52.709659 7 log.go:172] (0xc002b48a50) Reply frame received for 3 I0429 00:12:52.709696 7 log.go:172] (0xc002b48a50) (0xc00225f860) Create stream I0429 00:12:52.709711 7 log.go:172] (0xc002b48a50) (0xc00225f860) Stream added, broadcasting: 5 I0429 00:12:52.710490 7 
log.go:172] (0xc002b48a50) Reply frame received for 5 I0429 00:12:52.772101 7 log.go:172] (0xc002b48a50) Data frame received for 3 I0429 00:12:52.772141 7 log.go:172] (0xc001734fa0) (3) Data frame handling I0429 00:12:52.772154 7 log.go:172] (0xc001734fa0) (3) Data frame sent I0429 00:12:52.772164 7 log.go:172] (0xc002b48a50) Data frame received for 3 I0429 00:12:52.772174 7 log.go:172] (0xc001734fa0) (3) Data frame handling I0429 00:12:52.772186 7 log.go:172] (0xc002b48a50) Data frame received for 5 I0429 00:12:52.772195 7 log.go:172] (0xc00225f860) (5) Data frame handling I0429 00:12:52.773865 7 log.go:172] (0xc002b48a50) Data frame received for 1 I0429 00:12:52.773899 7 log.go:172] (0xc001734f00) (1) Data frame handling I0429 00:12:52.773917 7 log.go:172] (0xc001734f00) (1) Data frame sent I0429 00:12:52.773932 7 log.go:172] (0xc002b48a50) (0xc001734f00) Stream removed, broadcasting: 1 I0429 00:12:52.773970 7 log.go:172] (0xc002b48a50) Go away received I0429 00:12:52.774014 7 log.go:172] (0xc002b48a50) (0xc001734f00) Stream removed, broadcasting: 1 I0429 00:12:52.774039 7 log.go:172] (0xc002b48a50) (0xc001734fa0) Stream removed, broadcasting: 3 I0429 00:12:52.774056 7 log.go:172] (0xc002b48a50) (0xc00225f860) Stream removed, broadcasting: 5 Apr 29 00:12:52.774: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:12:52.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3465" for this suite. 
• [SLOW TEST:24.433 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2510,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:12:52.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:12:52.876: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 29 00:12:53.934: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure 
condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:12:55.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5837" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":148,"skipped":2519,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:12:55.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8651 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-8651 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8651 Apr 29 00:12:55.582: INFO: Found 0 stateful pods, waiting for 1 Apr 29 00:13:05.587: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 29 00:13:05.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:13:05.861: INFO: stderr: "I0429 00:13:05.738152 1685 log.go:172] (0xc0000e4c60) (0xc000670320) Create stream\nI0429 00:13:05.738233 1685 log.go:172] (0xc0000e4c60) (0xc000670320) Stream added, broadcasting: 1\nI0429 00:13:05.742451 1685 log.go:172] (0xc0000e4c60) Reply frame received for 1\nI0429 00:13:05.742478 1685 log.go:172] (0xc0000e4c60) (0xc0005d55e0) Create stream\nI0429 00:13:05.742486 1685 log.go:172] (0xc0000e4c60) (0xc0005d55e0) Stream added, broadcasting: 3\nI0429 00:13:05.743204 1685 log.go:172] (0xc0000e4c60) Reply frame received for 3\nI0429 00:13:05.743239 1685 log.go:172] (0xc0000e4c60) (0xc0003aaa00) Create stream\nI0429 00:13:05.743254 1685 log.go:172] (0xc0000e4c60) (0xc0003aaa00) Stream added, broadcasting: 5\nI0429 00:13:05.743870 1685 log.go:172] (0xc0000e4c60) Reply frame received for 5\nI0429 00:13:05.826041 1685 log.go:172] (0xc0000e4c60) Data frame received for 5\nI0429 00:13:05.826076 1685 log.go:172] (0xc0003aaa00) (5) Data frame handling\nI0429 00:13:05.826093 1685 log.go:172] (0xc0003aaa00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:13:05.853446 1685 log.go:172] (0xc0000e4c60) Data frame received for 3\nI0429 00:13:05.853501 1685 log.go:172] (0xc0005d55e0) (3) Data frame handling\nI0429 00:13:05.853532 1685 log.go:172] (0xc0005d55e0) (3) Data frame sent\nI0429 00:13:05.853569 1685 log.go:172] (0xc0000e4c60) Data frame received for 3\nI0429 00:13:05.853605 1685 log.go:172] (0xc0005d55e0) (3) Data frame handling\nI0429 00:13:05.853911 1685 log.go:172] (0xc0000e4c60) Data frame received for 5\nI0429 00:13:05.853938 1685 
log.go:172] (0xc0003aaa00) (5) Data frame handling\nI0429 00:13:05.855615 1685 log.go:172] (0xc0000e4c60) Data frame received for 1\nI0429 00:13:05.855644 1685 log.go:172] (0xc000670320) (1) Data frame handling\nI0429 00:13:05.855669 1685 log.go:172] (0xc000670320) (1) Data frame sent\nI0429 00:13:05.855697 1685 log.go:172] (0xc0000e4c60) (0xc000670320) Stream removed, broadcasting: 1\nI0429 00:13:05.855924 1685 log.go:172] (0xc0000e4c60) Go away received\nI0429 00:13:05.856099 1685 log.go:172] (0xc0000e4c60) (0xc000670320) Stream removed, broadcasting: 1\nI0429 00:13:05.856126 1685 log.go:172] (0xc0000e4c60) (0xc0005d55e0) Stream removed, broadcasting: 3\nI0429 00:13:05.856148 1685 log.go:172] (0xc0000e4c60) (0xc0003aaa00) Stream removed, broadcasting: 5\n" Apr 29 00:13:05.861: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:13:05.861: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:13:05.865: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 29 00:13:15.869: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:13:15.869: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:13:15.923: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 00:13:15.923: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC }] Apr 29 00:13:15.923: INFO: Apr 29 00:13:15.923: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 
29 00:13:16.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.955943391s Apr 29 00:13:17.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.950906093s Apr 29 00:13:18.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.946358508s Apr 29 00:13:20.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.942047465s Apr 29 00:13:21.021: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.861709229s Apr 29 00:13:22.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.857428016s Apr 29 00:13:23.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.852132397s Apr 29 00:13:24.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.847957139s Apr 29 00:13:25.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 842.924849ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8651 Apr 29 00:13:26.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:13:26.300: INFO: stderr: "I0429 00:13:26.195236 1707 log.go:172] (0xc00057cb00) (0xc000570140) Create stream\nI0429 00:13:26.195283 1707 log.go:172] (0xc00057cb00) (0xc000570140) Stream added, broadcasting: 1\nI0429 00:13:26.197977 1707 log.go:172] (0xc00057cb00) Reply frame received for 1\nI0429 00:13:26.198030 1707 log.go:172] (0xc00057cb00) (0xc0007c12c0) Create stream\nI0429 00:13:26.198045 1707 log.go:172] (0xc00057cb00) (0xc0007c12c0) Stream added, broadcasting: 3\nI0429 00:13:26.199042 1707 log.go:172] (0xc00057cb00) Reply frame received for 3\nI0429 00:13:26.199087 1707 log.go:172] (0xc00057cb00) (0xc000514000) Create stream\nI0429 00:13:26.199103 1707 log.go:172] (0xc00057cb00) (0xc000514000) Stream added, broadcasting: 5\nI0429 
00:13:26.200204 1707 log.go:172] (0xc00057cb00) Reply frame received for 5\nI0429 00:13:26.292946 1707 log.go:172] (0xc00057cb00) Data frame received for 3\nI0429 00:13:26.292978 1707 log.go:172] (0xc0007c12c0) (3) Data frame handling\nI0429 00:13:26.293015 1707 log.go:172] (0xc00057cb00) Data frame received for 5\nI0429 00:13:26.293064 1707 log.go:172] (0xc000514000) (5) Data frame handling\nI0429 00:13:26.293089 1707 log.go:172] (0xc000514000) (5) Data frame sent\nI0429 00:13:26.293294 1707 log.go:172] (0xc00057cb00) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:13:26.293431 1707 log.go:172] (0xc000514000) (5) Data frame handling\nI0429 00:13:26.293484 1707 log.go:172] (0xc0007c12c0) (3) Data frame sent\nI0429 00:13:26.293516 1707 log.go:172] (0xc00057cb00) Data frame received for 3\nI0429 00:13:26.293535 1707 log.go:172] (0xc0007c12c0) (3) Data frame handling\nI0429 00:13:26.295105 1707 log.go:172] (0xc00057cb00) Data frame received for 1\nI0429 00:13:26.295147 1707 log.go:172] (0xc000570140) (1) Data frame handling\nI0429 00:13:26.295184 1707 log.go:172] (0xc000570140) (1) Data frame sent\nI0429 00:13:26.295297 1707 log.go:172] (0xc00057cb00) (0xc000570140) Stream removed, broadcasting: 1\nI0429 00:13:26.295356 1707 log.go:172] (0xc00057cb00) Go away received\nI0429 00:13:26.295871 1707 log.go:172] (0xc00057cb00) (0xc000570140) Stream removed, broadcasting: 1\nI0429 00:13:26.295894 1707 log.go:172] (0xc00057cb00) (0xc0007c12c0) Stream removed, broadcasting: 3\nI0429 00:13:26.295906 1707 log.go:172] (0xc00057cb00) (0xc000514000) Stream removed, broadcasting: 5\n" Apr 29 00:13:26.301: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:13:26.301: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:13:26.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:13:26.495: INFO: stderr: "I0429 00:13:26.432432 1729 log.go:172] (0xc000a2aa50) (0xc0009c21e0) Create stream\nI0429 00:13:26.432508 1729 log.go:172] (0xc000a2aa50) (0xc0009c21e0) Stream added, broadcasting: 1\nI0429 00:13:26.435524 1729 log.go:172] (0xc000a2aa50) Reply frame received for 1\nI0429 00:13:26.435579 1729 log.go:172] (0xc000a2aa50) (0xc00068d220) Create stream\nI0429 00:13:26.435598 1729 log.go:172] (0xc000a2aa50) (0xc00068d220) Stream added, broadcasting: 3\nI0429 00:13:26.436770 1729 log.go:172] (0xc000a2aa50) Reply frame received for 3\nI0429 00:13:26.436834 1729 log.go:172] (0xc000a2aa50) (0xc00068d400) Create stream\nI0429 00:13:26.436855 1729 log.go:172] (0xc000a2aa50) (0xc00068d400) Stream added, broadcasting: 5\nI0429 00:13:26.438239 1729 log.go:172] (0xc000a2aa50) Reply frame received for 5\nI0429 00:13:26.487354 1729 log.go:172] (0xc000a2aa50) Data frame received for 5\nI0429 00:13:26.487410 1729 log.go:172] (0xc000a2aa50) Data frame received for 3\nI0429 00:13:26.487440 1729 log.go:172] (0xc00068d220) (3) Data frame handling\nI0429 00:13:26.487449 1729 log.go:172] (0xc00068d220) (3) Data frame sent\nI0429 00:13:26.487456 1729 log.go:172] (0xc000a2aa50) Data frame received for 3\nI0429 00:13:26.487463 1729 log.go:172] (0xc00068d220) (3) Data frame handling\nI0429 00:13:26.487492 1729 log.go:172] (0xc00068d400) (5) Data frame handling\nI0429 00:13:26.487526 1729 log.go:172] (0xc00068d400) (5) Data frame sent\nI0429 00:13:26.487541 1729 log.go:172] (0xc000a2aa50) Data frame received for 5\nI0429 00:13:26.487552 1729 log.go:172] (0xc00068d400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0429 00:13:26.489421 1729 log.go:172] (0xc000a2aa50) Data frame received for 1\nI0429 00:13:26.489457 
1729 log.go:172] (0xc0009c21e0) (1) Data frame handling\nI0429 00:13:26.489487 1729 log.go:172] (0xc0009c21e0) (1) Data frame sent\nI0429 00:13:26.489510 1729 log.go:172] (0xc000a2aa50) (0xc0009c21e0) Stream removed, broadcasting: 1\nI0429 00:13:26.489533 1729 log.go:172] (0xc000a2aa50) Go away received\nI0429 00:13:26.490034 1729 log.go:172] (0xc000a2aa50) (0xc0009c21e0) Stream removed, broadcasting: 1\nI0429 00:13:26.490074 1729 log.go:172] (0xc000a2aa50) (0xc00068d220) Stream removed, broadcasting: 3\nI0429 00:13:26.490100 1729 log.go:172] (0xc000a2aa50) (0xc00068d400) Stream removed, broadcasting: 5\n" Apr 29 00:13:26.496: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:13:26.496: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:13:26.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:13:26.715: INFO: stderr: "I0429 00:13:26.629622 1752 log.go:172] (0xc0009b1130) (0xc0008f6780) Create stream\nI0429 00:13:26.629670 1752 log.go:172] (0xc0009b1130) (0xc0008f6780) Stream added, broadcasting: 1\nI0429 00:13:26.634402 1752 log.go:172] (0xc0009b1130) Reply frame received for 1\nI0429 00:13:26.634453 1752 log.go:172] (0xc0009b1130) (0xc0005b4a00) Create stream\nI0429 00:13:26.634467 1752 log.go:172] (0xc0009b1130) (0xc0005b4a00) Stream added, broadcasting: 3\nI0429 00:13:26.635826 1752 log.go:172] (0xc0009b1130) Reply frame received for 3\nI0429 00:13:26.635871 1752 log.go:172] (0xc0009b1130) (0xc00083d220) Create stream\nI0429 00:13:26.635888 1752 log.go:172] (0xc0009b1130) (0xc00083d220) Stream added, broadcasting: 5\nI0429 00:13:26.636912 1752 log.go:172] (0xc0009b1130) Reply frame received for 5\nI0429 00:13:26.708058 1752 log.go:172] 
(0xc0009b1130) Data frame received for 3\nI0429 00:13:26.708094 1752 log.go:172] (0xc0005b4a00) (3) Data frame handling\nI0429 00:13:26.708105 1752 log.go:172] (0xc0005b4a00) (3) Data frame sent\nI0429 00:13:26.708114 1752 log.go:172] (0xc0009b1130) Data frame received for 3\nI0429 00:13:26.708153 1752 log.go:172] (0xc0009b1130) Data frame received for 5\nI0429 00:13:26.708206 1752 log.go:172] (0xc00083d220) (5) Data frame handling\nI0429 00:13:26.708222 1752 log.go:172] (0xc00083d220) (5) Data frame sent\nI0429 00:13:26.708237 1752 log.go:172] (0xc0009b1130) Data frame received for 5\nI0429 00:13:26.708248 1752 log.go:172] (0xc00083d220) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0429 00:13:26.708282 1752 log.go:172] (0xc0005b4a00) (3) Data frame handling\nI0429 00:13:26.710124 1752 log.go:172] (0xc0009b1130) Data frame received for 1\nI0429 00:13:26.710159 1752 log.go:172] (0xc0008f6780) (1) Data frame handling\nI0429 00:13:26.710180 1752 log.go:172] (0xc0008f6780) (1) Data frame sent\nI0429 00:13:26.710206 1752 log.go:172] (0xc0009b1130) (0xc0008f6780) Stream removed, broadcasting: 1\nI0429 00:13:26.710230 1752 log.go:172] (0xc0009b1130) Go away received\nI0429 00:13:26.710596 1752 log.go:172] (0xc0009b1130) (0xc0008f6780) Stream removed, broadcasting: 1\nI0429 00:13:26.710623 1752 log.go:172] (0xc0009b1130) (0xc0005b4a00) Stream removed, broadcasting: 3\nI0429 00:13:26.710634 1752 log.go:172] (0xc0009b1130) (0xc00083d220) Stream removed, broadcasting: 5\n" Apr 29 00:13:26.715: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:13:26.715: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:13:26.726: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 29 00:13:36.771: INFO: Waiting 
for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:13:36.771: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:13:36.771: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 29 00:13:36.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:13:37.011: INFO: stderr: "I0429 00:13:36.907017 1774 log.go:172] (0xc0000e8e70) (0xc0005b60a0) Create stream\nI0429 00:13:36.907077 1774 log.go:172] (0xc0000e8e70) (0xc0005b60a0) Stream added, broadcasting: 1\nI0429 00:13:36.910039 1774 log.go:172] (0xc0000e8e70) Reply frame received for 1\nI0429 00:13:36.910084 1774 log.go:172] (0xc0000e8e70) (0xc000780000) Create stream\nI0429 00:13:36.910097 1774 log.go:172] (0xc0000e8e70) (0xc000780000) Stream added, broadcasting: 3\nI0429 00:13:36.911260 1774 log.go:172] (0xc0000e8e70) Reply frame received for 3\nI0429 00:13:36.911303 1774 log.go:172] (0xc0000e8e70) (0xc000780140) Create stream\nI0429 00:13:36.911326 1774 log.go:172] (0xc0000e8e70) (0xc000780140) Stream added, broadcasting: 5\nI0429 00:13:36.912586 1774 log.go:172] (0xc0000e8e70) Reply frame received for 5\nI0429 00:13:37.003300 1774 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0429 00:13:37.003351 1774 log.go:172] (0xc000780000) (3) Data frame handling\nI0429 00:13:37.003375 1774 log.go:172] (0xc000780000) (3) Data frame sent\nI0429 00:13:37.003396 1774 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0429 00:13:37.003410 1774 log.go:172] (0xc000780140) (5) Data frame handling\nI0429 00:13:37.003427 1774 log.go:172] (0xc000780140) (5) Data frame sent\nI0429 00:13:37.003441 1774 log.go:172] (0xc0000e8e70) Data frame received for 5\nI0429 
00:13:37.003454 1774 log.go:172] (0xc000780140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:13:37.003955 1774 log.go:172] (0xc0000e8e70) Data frame received for 3\nI0429 00:13:37.003982 1774 log.go:172] (0xc000780000) (3) Data frame handling\nI0429 00:13:37.006064 1774 log.go:172] (0xc0000e8e70) Data frame received for 1\nI0429 00:13:37.006085 1774 log.go:172] (0xc0005b60a0) (1) Data frame handling\nI0429 00:13:37.006114 1774 log.go:172] (0xc0005b60a0) (1) Data frame sent\nI0429 00:13:37.006131 1774 log.go:172] (0xc0000e8e70) (0xc0005b60a0) Stream removed, broadcasting: 1\nI0429 00:13:37.006152 1774 log.go:172] (0xc0000e8e70) Go away received\nI0429 00:13:37.006601 1774 log.go:172] (0xc0000e8e70) (0xc0005b60a0) Stream removed, broadcasting: 1\nI0429 00:13:37.006624 1774 log.go:172] (0xc0000e8e70) (0xc000780000) Stream removed, broadcasting: 3\nI0429 00:13:37.006642 1774 log.go:172] (0xc0000e8e70) (0xc000780140) Stream removed, broadcasting: 5\n" Apr 29 00:13:37.011: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:13:37.011: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:13:37.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:13:37.266: INFO: stderr: "I0429 00:13:37.132094 1797 log.go:172] (0xc00003be40) (0xc0009ae000) Create stream\nI0429 00:13:37.132148 1797 log.go:172] (0xc00003be40) (0xc0009ae000) Stream added, broadcasting: 1\nI0429 00:13:37.134577 1797 log.go:172] (0xc00003be40) Reply frame received for 1\nI0429 00:13:37.134628 1797 log.go:172] (0xc00003be40) (0xc000894000) Create stream\nI0429 00:13:37.134642 1797 log.go:172] (0xc00003be40) (0xc000894000) Stream added, broadcasting: 3\nI0429 
00:13:37.135622 1797 log.go:172] (0xc00003be40) Reply frame received for 3\nI0429 00:13:37.135660 1797 log.go:172] (0xc00003be40) (0xc000689220) Create stream\nI0429 00:13:37.135681 1797 log.go:172] (0xc00003be40) (0xc000689220) Stream added, broadcasting: 5\nI0429 00:13:37.136677 1797 log.go:172] (0xc00003be40) Reply frame received for 5\nI0429 00:13:37.212843 1797 log.go:172] (0xc00003be40) Data frame received for 5\nI0429 00:13:37.212870 1797 log.go:172] (0xc000689220) (5) Data frame handling\nI0429 00:13:37.212883 1797 log.go:172] (0xc000689220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:13:37.260117 1797 log.go:172] (0xc00003be40) Data frame received for 3\nI0429 00:13:37.260154 1797 log.go:172] (0xc000894000) (3) Data frame handling\nI0429 00:13:37.260213 1797 log.go:172] (0xc00003be40) Data frame received for 5\nI0429 00:13:37.260257 1797 log.go:172] (0xc000689220) (5) Data frame handling\nI0429 00:13:37.260303 1797 log.go:172] (0xc000894000) (3) Data frame sent\nI0429 00:13:37.260323 1797 log.go:172] (0xc00003be40) Data frame received for 3\nI0429 00:13:37.260332 1797 log.go:172] (0xc000894000) (3) Data frame handling\nI0429 00:13:37.262217 1797 log.go:172] (0xc00003be40) Data frame received for 1\nI0429 00:13:37.262260 1797 log.go:172] (0xc0009ae000) (1) Data frame handling\nI0429 00:13:37.262305 1797 log.go:172] (0xc0009ae000) (1) Data frame sent\nI0429 00:13:37.262350 1797 log.go:172] (0xc00003be40) (0xc0009ae000) Stream removed, broadcasting: 1\nI0429 00:13:37.262386 1797 log.go:172] (0xc00003be40) Go away received\nI0429 00:13:37.262771 1797 log.go:172] (0xc00003be40) (0xc0009ae000) Stream removed, broadcasting: 1\nI0429 00:13:37.262795 1797 log.go:172] (0xc00003be40) (0xc000894000) Stream removed, broadcasting: 3\nI0429 00:13:37.262809 1797 log.go:172] (0xc00003be40) (0xc000689220) Stream removed, broadcasting: 5\n" Apr 29 00:13:37.266: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" 
Apr 29 00:13:37.266: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:13:37.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8651 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:13:37.531: INFO: stderr: "I0429 00:13:37.411041 1817 log.go:172] (0xc00093c6e0) (0xc0009241e0) Create stream\nI0429 00:13:37.411119 1817 log.go:172] (0xc00093c6e0) (0xc0009241e0) Stream added, broadcasting: 1\nI0429 00:13:37.413442 1817 log.go:172] (0xc00093c6e0) Reply frame received for 1\nI0429 00:13:37.413486 1817 log.go:172] (0xc00093c6e0) (0xc000410dc0) Create stream\nI0429 00:13:37.413498 1817 log.go:172] (0xc00093c6e0) (0xc000410dc0) Stream added, broadcasting: 3\nI0429 00:13:37.414258 1817 log.go:172] (0xc00093c6e0) Reply frame received for 3\nI0429 00:13:37.414290 1817 log.go:172] (0xc00093c6e0) (0xc000924320) Create stream\nI0429 00:13:37.414304 1817 log.go:172] (0xc00093c6e0) (0xc000924320) Stream added, broadcasting: 5\nI0429 00:13:37.415051 1817 log.go:172] (0xc00093c6e0) Reply frame received for 5\nI0429 00:13:37.477376 1817 log.go:172] (0xc00093c6e0) Data frame received for 5\nI0429 00:13:37.477409 1817 log.go:172] (0xc000924320) (5) Data frame handling\nI0429 00:13:37.477428 1817 log.go:172] (0xc000924320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:13:37.523657 1817 log.go:172] (0xc00093c6e0) Data frame received for 3\nI0429 00:13:37.523694 1817 log.go:172] (0xc000410dc0) (3) Data frame handling\nI0429 00:13:37.523731 1817 log.go:172] (0xc000410dc0) (3) Data frame sent\nI0429 00:13:37.523773 1817 log.go:172] (0xc00093c6e0) Data frame received for 5\nI0429 00:13:37.523800 1817 log.go:172] (0xc000924320) (5) Data frame handling\nI0429 00:13:37.523841 1817 log.go:172] (0xc00093c6e0) Data frame received for 
3\nI0429 00:13:37.523867 1817 log.go:172] (0xc000410dc0) (3) Data frame handling\nI0429 00:13:37.525838 1817 log.go:172] (0xc00093c6e0) Data frame received for 1\nI0429 00:13:37.525859 1817 log.go:172] (0xc0009241e0) (1) Data frame handling\nI0429 00:13:37.525884 1817 log.go:172] (0xc0009241e0) (1) Data frame sent\nI0429 00:13:37.525905 1817 log.go:172] (0xc00093c6e0) (0xc0009241e0) Stream removed, broadcasting: 1\nI0429 00:13:37.525991 1817 log.go:172] (0xc00093c6e0) Go away received\nI0429 00:13:37.526223 1817 log.go:172] (0xc00093c6e0) (0xc0009241e0) Stream removed, broadcasting: 1\nI0429 00:13:37.526242 1817 log.go:172] (0xc00093c6e0) (0xc000410dc0) Stream removed, broadcasting: 3\nI0429 00:13:37.526257 1817 log.go:172] (0xc00093c6e0) (0xc000924320) Stream removed, broadcasting: 5\n" Apr 29 00:13:37.531: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:13:37.531: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:13:37.531: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:13:37.567: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 29 00:13:47.573: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:13:47.574: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:13:47.574: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:13:47.710: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 00:13:47.710: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC }] Apr 29 00:13:47.710: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] Apr 29 00:13:47.710: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] Apr 29 00:13:47.711: INFO: Apr 29 00:13:47.711: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 00:13:48.766: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 00:13:48.766: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC }] Apr 29 00:13:48.766: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] Apr 29 00:13:48.766: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] Apr 29 00:13:48.766: INFO: Apr 29 00:13:48.766: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 00:13:49.770: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 00:13:49.770: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC }] Apr 29 00:13:49.770: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] 
Apr 29 00:13:49.770: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] Apr 29 00:13:49.770: INFO: Apr 29 00:13:49.770: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 29 00:13:50.796: INFO: POD NODE PHASE GRACE CONDITIONS Apr 29 00:13:50.796: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:12:55 +0000 UTC }] Apr 29 00:13:50.796: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 00:13:15 +0000 UTC }] Apr 29 00:13:50.796: INFO: Apr 29 00:13:50.796: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 29 00:13:51.800: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.903401342s Apr 29 00:13:52.806: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.899924614s Apr 29 00:13:53.816: INFO: 
Verifying statefulset ss doesn't scale past 0 for another 3.893207228s Apr 29 00:13:54.819: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.883537914s Apr 29 00:13:55.830: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.880206939s Apr 29 00:13:56.833: INFO: Verifying statefulset ss doesn't scale past 0 for another 869.673857ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8651 Apr 29 00:13:57.838: INFO: Scaling statefulset ss to 0 Apr 29 00:13:57.867: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 29 00:13:57.869: INFO: Deleting all statefulset in ns statefulset-8651 Apr 29 00:13:57.872: INFO: Scaling statefulset ss to 0 Apr 29 00:13:57.878: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:13:57.881: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:13:57.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8651" for this suite. 
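The burst-scaling test above makes pods unready without killing them: the `kubectl exec ... mv` commands move `index.html` out of the httpd docroot so an HTTP readiness probe starts failing, and the test then verifies scaling proceeds regardless. A minimal sketch of a StatefulSet matching that behavior follows; this is reconstructed from the log, not the exact manifest the e2e framework generates, and the httpd image tag and probe path are assumptions. `podManagementPolicy: Parallel` is what permits burst scaling past unready pods.

```yaml
# Sketch only: StatefulSet like the "ss" set in the log above.
# The image tag and readiness-probe details are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # headless service "test" created in the log
  replicas: 3
  podManagementPolicy: Parallel  # burst scaling: don't wait for Ready pods
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4
        readinessProbe:
          httpGet:
            path: /index.html    # 404s once index.html is moved to /tmp
            port: 80
```

Moving `/usr/local/apache2/htdocs/index.html` to `/tmp/` (as the exec commands above do) makes this probe fail, flipping the pod to `Ready=false` while the container keeps running; moving it back restores readiness, which is exactly the `Ready=true`/`Ready=false` oscillation the log records.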
• [SLOW TEST:62.713 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":149,"skipped":2523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:13:57.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-1534 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1534 to expose endpoints map[] Apr 29 00:13:58.062: INFO: Get endpoints failed (30.491392ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 29 00:13:59.065: INFO: successfully validated that service 
endpoint-test2 in namespace services-1534 exposes endpoints map[] (1.034035262s elapsed) STEP: Creating pod pod1 in namespace services-1534 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1534 to expose endpoints map[pod1:[80]] Apr 29 00:14:02.129: INFO: successfully validated that service endpoint-test2 in namespace services-1534 exposes endpoints map[pod1:[80]] (3.058089832s elapsed) STEP: Creating pod pod2 in namespace services-1534 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1534 to expose endpoints map[pod1:[80] pod2:[80]] Apr 29 00:14:05.278: INFO: successfully validated that service endpoint-test2 in namespace services-1534 exposes endpoints map[pod1:[80] pod2:[80]] (3.144544022s elapsed) STEP: Deleting pod pod1 in namespace services-1534 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1534 to expose endpoints map[pod2:[80]] Apr 29 00:14:06.355: INFO: successfully validated that service endpoint-test2 in namespace services-1534 exposes endpoints map[pod2:[80]] (1.073771078s elapsed) STEP: Deleting pod pod2 in namespace services-1534 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1534 to expose endpoints map[] Apr 29 00:14:07.395: INFO: successfully validated that service endpoint-test2 in namespace services-1534 exposes endpoints map[] (1.035431204s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:14:07.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1534" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.746 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":150,"skipped":2550,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:14:07.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 00:14:11.779: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:14:11.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3402" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2563,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:14:11.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8703 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 29 00:14:11.920: INFO: Found 0 stateful pods, waiting for 3 Apr 29 00:14:21.925: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:14:21.925: INFO: Waiting 
for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:14:21.925: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 29 00:14:31.925: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:14:31.925: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:14:31.925: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:14:31.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8703 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:14:32.220: INFO: stderr: "I0429 00:14:32.068299 1839 log.go:172] (0xc0000e1080) (0xc0005c3540) Create stream\nI0429 00:14:32.068357 1839 log.go:172] (0xc0000e1080) (0xc0005c3540) Stream added, broadcasting: 1\nI0429 00:14:32.071107 1839 log.go:172] (0xc0000e1080) Reply frame received for 1\nI0429 00:14:32.071165 1839 log.go:172] (0xc0000e1080) (0xc000136960) Create stream\nI0429 00:14:32.071190 1839 log.go:172] (0xc0000e1080) (0xc000136960) Stream added, broadcasting: 3\nI0429 00:14:32.072182 1839 log.go:172] (0xc0000e1080) Reply frame received for 3\nI0429 00:14:32.072207 1839 log.go:172] (0xc0000e1080) (0xc000136a00) Create stream\nI0429 00:14:32.072220 1839 log.go:172] (0xc0000e1080) (0xc000136a00) Stream added, broadcasting: 5\nI0429 00:14:32.073295 1839 log.go:172] (0xc0000e1080) Reply frame received for 5\nI0429 00:14:32.172130 1839 log.go:172] (0xc0000e1080) Data frame received for 5\nI0429 00:14:32.172159 1839 log.go:172] (0xc000136a00) (5) Data frame handling\nI0429 00:14:32.172175 1839 log.go:172] (0xc000136a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:14:32.212523 1839 log.go:172] (0xc0000e1080) Data frame received for 3\nI0429 00:14:32.212559 1839 
log.go:172] (0xc000136960) (3) Data frame handling\nI0429 00:14:32.212582 1839 log.go:172] (0xc000136960) (3) Data frame sent\nI0429 00:14:32.212781 1839 log.go:172] (0xc0000e1080) Data frame received for 3\nI0429 00:14:32.212800 1839 log.go:172] (0xc000136960) (3) Data frame handling\nI0429 00:14:32.212921 1839 log.go:172] (0xc0000e1080) Data frame received for 5\nI0429 00:14:32.212944 1839 log.go:172] (0xc000136a00) (5) Data frame handling\nI0429 00:14:32.214596 1839 log.go:172] (0xc0000e1080) Data frame received for 1\nI0429 00:14:32.214659 1839 log.go:172] (0xc0005c3540) (1) Data frame handling\nI0429 00:14:32.214677 1839 log.go:172] (0xc0005c3540) (1) Data frame sent\nI0429 00:14:32.214696 1839 log.go:172] (0xc0000e1080) (0xc0005c3540) Stream removed, broadcasting: 1\nI0429 00:14:32.214727 1839 log.go:172] (0xc0000e1080) Go away received\nI0429 00:14:32.215116 1839 log.go:172] (0xc0000e1080) (0xc0005c3540) Stream removed, broadcasting: 1\nI0429 00:14:32.215136 1839 log.go:172] (0xc0000e1080) (0xc000136960) Stream removed, broadcasting: 3\nI0429 00:14:32.215148 1839 log.go:172] (0xc0000e1080) (0xc000136a00) Stream removed, broadcasting: 5\n" Apr 29 00:14:32.220: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:14:32.220: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 29 00:14:42.252: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 29 00:14:52.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8703 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:14:52.521: INFO: stderr: "I0429 00:14:52.413952 1859 
log.go:172] (0xc000a8d340) (0xc0009e6500) Create stream\nI0429 00:14:52.414007 1859 log.go:172] (0xc000a8d340) (0xc0009e6500) Stream added, broadcasting: 1\nI0429 00:14:52.416403 1859 log.go:172] (0xc000a8d340) Reply frame received for 1\nI0429 00:14:52.416446 1859 log.go:172] (0xc000a8d340) (0xc000ac4500) Create stream\nI0429 00:14:52.416984 1859 log.go:172] (0xc000a8d340) (0xc000ac4500) Stream added, broadcasting: 3\nI0429 00:14:52.418855 1859 log.go:172] (0xc000a8d340) Reply frame received for 3\nI0429 00:14:52.418893 1859 log.go:172] (0xc000a8d340) (0xc0009ce640) Create stream\nI0429 00:14:52.418914 1859 log.go:172] (0xc000a8d340) (0xc0009ce640) Stream added, broadcasting: 5\nI0429 00:14:52.421546 1859 log.go:172] (0xc000a8d340) Reply frame received for 5\nI0429 00:14:52.515951 1859 log.go:172] (0xc000a8d340) Data frame received for 3\nI0429 00:14:52.515992 1859 log.go:172] (0xc000ac4500) (3) Data frame handling\nI0429 00:14:52.516008 1859 log.go:172] (0xc000ac4500) (3) Data frame sent\nI0429 00:14:52.516018 1859 log.go:172] (0xc000a8d340) Data frame received for 3\nI0429 00:14:52.516023 1859 log.go:172] (0xc000ac4500) (3) Data frame handling\nI0429 00:14:52.516049 1859 log.go:172] (0xc000a8d340) Data frame received for 5\nI0429 00:14:52.516057 1859 log.go:172] (0xc0009ce640) (5) Data frame handling\nI0429 00:14:52.516066 1859 log.go:172] (0xc0009ce640) (5) Data frame sent\nI0429 00:14:52.516072 1859 log.go:172] (0xc000a8d340) Data frame received for 5\nI0429 00:14:52.516077 1859 log.go:172] (0xc0009ce640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:14:52.517547 1859 log.go:172] (0xc000a8d340) Data frame received for 1\nI0429 00:14:52.517574 1859 log.go:172] (0xc0009e6500) (1) Data frame handling\nI0429 00:14:52.517591 1859 log.go:172] (0xc0009e6500) (1) Data frame sent\nI0429 00:14:52.517607 1859 log.go:172] (0xc000a8d340) (0xc0009e6500) Stream removed, broadcasting: 1\nI0429 00:14:52.517620 1859 log.go:172] 
(0xc000a8d340) Go away received\nI0429 00:14:52.518047 1859 log.go:172] (0xc000a8d340) (0xc0009e6500) Stream removed, broadcasting: 1\nI0429 00:14:52.518067 1859 log.go:172] (0xc000a8d340) (0xc000ac4500) Stream removed, broadcasting: 3\nI0429 00:14:52.518076 1859 log.go:172] (0xc000a8d340) (0xc0009ce640) Stream removed, broadcasting: 5\n" Apr 29 00:14:52.521: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:14:52.521: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:15:12.543: INFO: Waiting for StatefulSet statefulset-8703/ss2 to complete update Apr 29 00:15:12.543: INFO: Waiting for Pod statefulset-8703/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Apr 29 00:15:22.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8703 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:15:22.826: INFO: stderr: "I0429 00:15:22.683788 1882 log.go:172] (0xc0009946e0) (0xc0008e0000) Create stream\nI0429 00:15:22.683855 1882 log.go:172] (0xc0009946e0) (0xc0008e0000) Stream added, broadcasting: 1\nI0429 00:15:22.686231 1882 log.go:172] (0xc0009946e0) Reply frame received for 1\nI0429 00:15:22.686267 1882 log.go:172] (0xc0009946e0) (0xc0007fb2c0) Create stream\nI0429 00:15:22.686278 1882 log.go:172] (0xc0009946e0) (0xc0007fb2c0) Stream added, broadcasting: 3\nI0429 00:15:22.687322 1882 log.go:172] (0xc0009946e0) Reply frame received for 3\nI0429 00:15:22.687382 1882 log.go:172] (0xc0009946e0) (0xc00089e0a0) Create stream\nI0429 00:15:22.687402 1882 log.go:172] (0xc0009946e0) (0xc00089e0a0) Stream added, broadcasting: 5\nI0429 00:15:22.688312 1882 log.go:172] (0xc0009946e0) Reply frame received for 5\nI0429 00:15:22.767708 1882 log.go:172] (0xc0009946e0) 
Data frame received for 5\nI0429 00:15:22.767735 1882 log.go:172] (0xc00089e0a0) (5) Data frame handling\nI0429 00:15:22.767752 1882 log.go:172] (0xc00089e0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:15:22.819408 1882 log.go:172] (0xc0009946e0) Data frame received for 3\nI0429 00:15:22.819437 1882 log.go:172] (0xc0007fb2c0) (3) Data frame handling\nI0429 00:15:22.819455 1882 log.go:172] (0xc0009946e0) Data frame received for 5\nI0429 00:15:22.819505 1882 log.go:172] (0xc00089e0a0) (5) Data frame handling\nI0429 00:15:22.819537 1882 log.go:172] (0xc0007fb2c0) (3) Data frame sent\nI0429 00:15:22.819556 1882 log.go:172] (0xc0009946e0) Data frame received for 3\nI0429 00:15:22.819581 1882 log.go:172] (0xc0007fb2c0) (3) Data frame handling\nI0429 00:15:22.821761 1882 log.go:172] (0xc0009946e0) Data frame received for 1\nI0429 00:15:22.821781 1882 log.go:172] (0xc0008e0000) (1) Data frame handling\nI0429 00:15:22.821794 1882 log.go:172] (0xc0008e0000) (1) Data frame sent\nI0429 00:15:22.821811 1882 log.go:172] (0xc0009946e0) (0xc0008e0000) Stream removed, broadcasting: 1\nI0429 00:15:22.821827 1882 log.go:172] (0xc0009946e0) Go away received\nI0429 00:15:22.822272 1882 log.go:172] (0xc0009946e0) (0xc0008e0000) Stream removed, broadcasting: 1\nI0429 00:15:22.822304 1882 log.go:172] (0xc0009946e0) (0xc0007fb2c0) Stream removed, broadcasting: 3\nI0429 00:15:22.822319 1882 log.go:172] (0xc0009946e0) (0xc00089e0a0) Stream removed, broadcasting: 5\n" Apr 29 00:15:22.826: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:15:22.826: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:15:32.875: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 29 00:15:42.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-8703 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:15:43.189: INFO: stderr: "I0429 00:15:43.090238 1902 log.go:172] (0xc0000e8630) (0xc0007e1360) Create stream\nI0429 00:15:43.090314 1902 log.go:172] (0xc0000e8630) (0xc0007e1360) Stream added, broadcasting: 1\nI0429 00:15:43.093318 1902 log.go:172] (0xc0000e8630) Reply frame received for 1\nI0429 00:15:43.093354 1902 log.go:172] (0xc0000e8630) (0xc0007e1400) Create stream\nI0429 00:15:43.093364 1902 log.go:172] (0xc0000e8630) (0xc0007e1400) Stream added, broadcasting: 3\nI0429 00:15:43.094608 1902 log.go:172] (0xc0000e8630) Reply frame received for 3\nI0429 00:15:43.094669 1902 log.go:172] (0xc0000e8630) (0xc00054ebe0) Create stream\nI0429 00:15:43.094684 1902 log.go:172] (0xc0000e8630) (0xc00054ebe0) Stream added, broadcasting: 5\nI0429 00:15:43.095753 1902 log.go:172] (0xc0000e8630) Reply frame received for 5\nI0429 00:15:43.181501 1902 log.go:172] (0xc0000e8630) Data frame received for 5\nI0429 00:15:43.181537 1902 log.go:172] (0xc00054ebe0) (5) Data frame handling\nI0429 00:15:43.181550 1902 log.go:172] (0xc00054ebe0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:15:43.181565 1902 log.go:172] (0xc0000e8630) Data frame received for 3\nI0429 00:15:43.181573 1902 log.go:172] (0xc0007e1400) (3) Data frame handling\nI0429 00:15:43.181580 1902 log.go:172] (0xc0007e1400) (3) Data frame sent\nI0429 00:15:43.181588 1902 log.go:172] (0xc0000e8630) Data frame received for 3\nI0429 00:15:43.181596 1902 log.go:172] (0xc0007e1400) (3) Data frame handling\nI0429 00:15:43.181901 1902 log.go:172] (0xc0000e8630) Data frame received for 5\nI0429 00:15:43.181927 1902 log.go:172] (0xc00054ebe0) (5) Data frame handling\nI0429 00:15:43.183534 1902 log.go:172] (0xc0000e8630) Data frame received for 1\nI0429 00:15:43.183578 1902 log.go:172] (0xc0007e1360) (1) Data frame handling\nI0429 00:15:43.183607 1902 log.go:172] 
(0xc0007e1360) (1) Data frame sent\nI0429 00:15:43.183630 1902 log.go:172] (0xc0000e8630) (0xc0007e1360) Stream removed, broadcasting: 1\nI0429 00:15:43.183721 1902 log.go:172] (0xc0000e8630) Go away received\nI0429 00:15:43.183960 1902 log.go:172] (0xc0000e8630) (0xc0007e1360) Stream removed, broadcasting: 1\nI0429 00:15:43.183984 1902 log.go:172] (0xc0000e8630) (0xc0007e1400) Stream removed, broadcasting: 3\nI0429 00:15:43.184001 1902 log.go:172] (0xc0000e8630) (0xc00054ebe0) Stream removed, broadcasting: 5\n" Apr 29 00:15:43.189: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:15:43.189: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 29 00:16:03.230: INFO: Deleting all statefulset in ns statefulset-8703 Apr 29 00:16:03.232: INFO: Scaling statefulset ss2 to 0 Apr 29 00:16:23.255: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:16:23.258: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:16:23.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8703" for this suite. 
• [SLOW TEST:131.435 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":152,"skipped":2572,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:16:23.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-6063ab90-ea98-478c-acbe-ac0c3369b689 STEP: Creating a pod to test consume secrets Apr 29 00:16:23.402: INFO: Waiting up to 5m0s for pod "pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1" in namespace "secrets-4483" to be "Succeeded or Failed" Apr 29 00:16:23.406: INFO: Pod "pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.866825ms Apr 29 00:16:25.410: INFO: Pod "pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008300718s Apr 29 00:16:27.414: INFO: Pod "pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012167411s STEP: Saw pod success Apr 29 00:16:27.414: INFO: Pod "pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1" satisfied condition "Succeeded or Failed" Apr 29 00:16:27.417: INFO: Trying to get logs from node latest-worker pod pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1 container secret-env-test: STEP: delete the pod Apr 29 00:16:27.449: INFO: Waiting for pod pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1 to disappear Apr 29 00:16:27.454: INFO: Pod pod-secrets-f3ad41fa-538a-4a1b-9f73-34cb42a5d9b1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:16:27.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4483" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:16:27.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 29 00:16:27.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347" in namespace "downward-api-4408" to be "Succeeded or Failed" Apr 29 00:16:27.544: INFO: Pod "downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809807ms Apr 29 00:16:29.642: INFO: Pod "downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101768988s Apr 29 00:16:31.646: INFO: Pod "downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.105990931s STEP: Saw pod success Apr 29 00:16:31.646: INFO: Pod "downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347" satisfied condition "Succeeded or Failed" Apr 29 00:16:31.649: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347 container client-container: STEP: delete the pod Apr 29 00:16:31.690: INFO: Waiting for pod downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347 to disappear Apr 29 00:16:31.725: INFO: Pod downwardapi-volume-f7a3a471-7dc8-4fc9-a076-7d93394ee347 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:16:31.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4408" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2623,"failed":0} ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:16:31.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace 
services-1189 STEP: creating replication controller nodeport-test in namespace services-1189 I0429 00:16:31.859086 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1189, replica count: 2 I0429 00:16:34.909496 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 00:16:37.909738 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 00:16:37.909: INFO: Creating new exec pod Apr 29 00:16:42.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1189 execpod6l5ph -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 29 00:16:43.160: INFO: stderr: "I0429 00:16:43.061727 1923 log.go:172] (0xc0000e9e40) (0xc0006f1540) Create stream\nI0429 00:16:43.061784 1923 log.go:172] (0xc0000e9e40) (0xc0006f1540) Stream added, broadcasting: 1\nI0429 00:16:43.064070 1923 log.go:172] (0xc0000e9e40) Reply frame received for 1\nI0429 00:16:43.064106 1923 log.go:172] (0xc0000e9e40) (0xc0009dc000) Create stream\nI0429 00:16:43.064115 1923 log.go:172] (0xc0000e9e40) (0xc0009dc000) Stream added, broadcasting: 3\nI0429 00:16:43.064969 1923 log.go:172] (0xc0000e9e40) Reply frame received for 3\nI0429 00:16:43.065002 1923 log.go:172] (0xc0000e9e40) (0xc0006f15e0) Create stream\nI0429 00:16:43.065015 1923 log.go:172] (0xc0000e9e40) (0xc0006f15e0) Stream added, broadcasting: 5\nI0429 00:16:43.066061 1923 log.go:172] (0xc0000e9e40) Reply frame received for 5\nI0429 00:16:43.152446 1923 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0429 00:16:43.152482 1923 log.go:172] (0xc0006f15e0) (5) Data frame handling\nI0429 00:16:43.152507 1923 log.go:172] (0xc0006f15e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0429 00:16:43.152963 1923 log.go:172] 
(0xc0000e9e40) Data frame received for 3\nI0429 00:16:43.152987 1923 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0429 00:16:43.153688 1923 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0429 00:16:43.153706 1923 log.go:172] (0xc0006f15e0) (5) Data frame handling\nI0429 00:16:43.153715 1923 log.go:172] (0xc0006f15e0) (5) Data frame sent\nI0429 00:16:43.153724 1923 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0429 00:16:43.153732 1923 log.go:172] (0xc0006f15e0) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0429 00:16:43.155541 1923 log.go:172] (0xc0000e9e40) Data frame received for 1\nI0429 00:16:43.155573 1923 log.go:172] (0xc0006f1540) (1) Data frame handling\nI0429 00:16:43.155612 1923 log.go:172] (0xc0006f1540) (1) Data frame sent\nI0429 00:16:43.155641 1923 log.go:172] (0xc0000e9e40) (0xc0006f1540) Stream removed, broadcasting: 1\nI0429 00:16:43.155668 1923 log.go:172] (0xc0000e9e40) Go away received\nI0429 00:16:43.156051 1923 log.go:172] (0xc0000e9e40) (0xc0006f1540) Stream removed, broadcasting: 1\nI0429 00:16:43.156075 1923 log.go:172] (0xc0000e9e40) (0xc0009dc000) Stream removed, broadcasting: 3\nI0429 00:16:43.156088 1923 log.go:172] (0xc0000e9e40) (0xc0006f15e0) Stream removed, broadcasting: 5\n" Apr 29 00:16:43.160: INFO: stdout: "" Apr 29 00:16:43.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1189 execpod6l5ph -- /bin/sh -x -c nc -zv -t -w 2 10.96.71.24 80' Apr 29 00:16:43.360: INFO: stderr: "I0429 00:16:43.279939 1946 log.go:172] (0xc000a3f130) (0xc000a366e0) Create stream\nI0429 00:16:43.279996 1946 log.go:172] (0xc000a3f130) (0xc000a366e0) Stream added, broadcasting: 1\nI0429 00:16:43.282453 1946 log.go:172] (0xc000a3f130) Reply frame received for 1\nI0429 00:16:43.282482 1946 log.go:172] (0xc000a3f130) (0xc0009c0000) Create stream\nI0429 00:16:43.282493 1946 log.go:172] (0xc000a3f130) 
(0xc0009c0000) Stream added, broadcasting: 3\nI0429 00:16:43.283442 1946 log.go:172] (0xc000a3f130) Reply frame received for 3\nI0429 00:16:43.283493 1946 log.go:172] (0xc000a3f130) (0xc000a36780) Create stream\nI0429 00:16:43.283510 1946 log.go:172] (0xc000a3f130) (0xc000a36780) Stream added, broadcasting: 5\nI0429 00:16:43.284399 1946 log.go:172] (0xc000a3f130) Reply frame received for 5\nI0429 00:16:43.352423 1946 log.go:172] (0xc000a3f130) Data frame received for 3\nI0429 00:16:43.352481 1946 log.go:172] (0xc000a3f130) Data frame received for 5\nI0429 00:16:43.352523 1946 log.go:172] (0xc000a36780) (5) Data frame handling\nI0429 00:16:43.352544 1946 log.go:172] (0xc000a36780) (5) Data frame sent\nI0429 00:16:43.352564 1946 log.go:172] (0xc000a3f130) Data frame received for 5\nI0429 00:16:43.352588 1946 log.go:172] (0xc000a36780) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.71.24 80\nConnection to 10.96.71.24 80 port [tcp/http] succeeded!\nI0429 00:16:43.352620 1946 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0429 00:16:43.354785 1946 log.go:172] (0xc000a3f130) Data frame received for 1\nI0429 00:16:43.354826 1946 log.go:172] (0xc000a366e0) (1) Data frame handling\nI0429 00:16:43.354862 1946 log.go:172] (0xc000a366e0) (1) Data frame sent\nI0429 00:16:43.354895 1946 log.go:172] (0xc000a3f130) (0xc000a366e0) Stream removed, broadcasting: 1\nI0429 00:16:43.354925 1946 log.go:172] (0xc000a3f130) Go away received\nI0429 00:16:43.356042 1946 log.go:172] (0xc000a3f130) (0xc000a366e0) Stream removed, broadcasting: 1\nI0429 00:16:43.356074 1946 log.go:172] (0xc000a3f130) (0xc0009c0000) Stream removed, broadcasting: 3\nI0429 00:16:43.356093 1946 log.go:172] (0xc000a3f130) (0xc000a36780) Stream removed, broadcasting: 5\n" Apr 29 00:16:43.360: INFO: stdout: "" Apr 29 00:16:43.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1189 execpod6l5ph -- /bin/sh -x -c nc -zv -t -w 2 
172.17.0.13 31182' Apr 29 00:16:43.578: INFO: stderr: "I0429 00:16:43.505526 1963 log.go:172] (0xc0000ec370) (0xc0006e1220) Create stream\nI0429 00:16:43.505600 1963 log.go:172] (0xc0000ec370) (0xc0006e1220) Stream added, broadcasting: 1\nI0429 00:16:43.508791 1963 log.go:172] (0xc0000ec370) Reply frame received for 1\nI0429 00:16:43.508830 1963 log.go:172] (0xc0000ec370) (0xc0006e12c0) Create stream\nI0429 00:16:43.508839 1963 log.go:172] (0xc0000ec370) (0xc0006e12c0) Stream added, broadcasting: 3\nI0429 00:16:43.510157 1963 log.go:172] (0xc0000ec370) Reply frame received for 3\nI0429 00:16:43.510188 1963 log.go:172] (0xc0000ec370) (0xc0006e1360) Create stream\nI0429 00:16:43.510196 1963 log.go:172] (0xc0000ec370) (0xc0006e1360) Stream added, broadcasting: 5\nI0429 00:16:43.511226 1963 log.go:172] (0xc0000ec370) Reply frame received for 5\nI0429 00:16:43.570557 1963 log.go:172] (0xc0000ec370) Data frame received for 5\nI0429 00:16:43.570586 1963 log.go:172] (0xc0006e1360) (5) Data frame handling\nI0429 00:16:43.570598 1963 log.go:172] (0xc0006e1360) (5) Data frame sent\nI0429 00:16:43.570606 1963 log.go:172] (0xc0000ec370) Data frame received for 5\nI0429 00:16:43.570613 1963 log.go:172] (0xc0006e1360) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31182\nConnection to 172.17.0.13 31182 port [tcp/31182] succeeded!\nI0429 00:16:43.570634 1963 log.go:172] (0xc0000ec370) Data frame received for 3\nI0429 00:16:43.570645 1963 log.go:172] (0xc0006e12c0) (3) Data frame handling\nI0429 00:16:43.572538 1963 log.go:172] (0xc0000ec370) Data frame received for 1\nI0429 00:16:43.572556 1963 log.go:172] (0xc0006e1220) (1) Data frame handling\nI0429 00:16:43.572565 1963 log.go:172] (0xc0006e1220) (1) Data frame sent\nI0429 00:16:43.572677 1963 log.go:172] (0xc0000ec370) (0xc0006e1220) Stream removed, broadcasting: 1\nI0429 00:16:43.572741 1963 log.go:172] (0xc0000ec370) Go away received\nI0429 00:16:43.572956 1963 log.go:172] (0xc0000ec370) (0xc0006e1220) Stream removed, 
broadcasting: 1\nI0429 00:16:43.572970 1963 log.go:172] (0xc0000ec370) (0xc0006e12c0) Stream removed, broadcasting: 3\nI0429 00:16:43.572979 1963 log.go:172] (0xc0000ec370) (0xc0006e1360) Stream removed, broadcasting: 5\n" Apr 29 00:16:43.578: INFO: stdout: "" Apr 29 00:16:43.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-1189 execpod6l5ph -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31182' Apr 29 00:16:43.805: INFO: stderr: "I0429 00:16:43.715800 1983 log.go:172] (0xc0009b13f0) (0xc00095a8c0) Create stream\nI0429 00:16:43.715869 1983 log.go:172] (0xc0009b13f0) (0xc00095a8c0) Stream added, broadcasting: 1\nI0429 00:16:43.720212 1983 log.go:172] (0xc0009b13f0) Reply frame received for 1\nI0429 00:16:43.720262 1983 log.go:172] (0xc0009b13f0) (0xc00095a000) Create stream\nI0429 00:16:43.720277 1983 log.go:172] (0xc0009b13f0) (0xc00095a000) Stream added, broadcasting: 3\nI0429 00:16:43.721331 1983 log.go:172] (0xc0009b13f0) Reply frame received for 3\nI0429 00:16:43.721388 1983 log.go:172] (0xc0009b13f0) (0xc00060b4a0) Create stream\nI0429 00:16:43.721406 1983 log.go:172] (0xc0009b13f0) (0xc00060b4a0) Stream added, broadcasting: 5\nI0429 00:16:43.722344 1983 log.go:172] (0xc0009b13f0) Reply frame received for 5\nI0429 00:16:43.798837 1983 log.go:172] (0xc0009b13f0) Data frame received for 5\nI0429 00:16:43.798884 1983 log.go:172] (0xc00060b4a0) (5) Data frame handling\nI0429 00:16:43.798900 1983 log.go:172] (0xc00060b4a0) (5) Data frame sent\nI0429 00:16:43.798910 1983 log.go:172] (0xc0009b13f0) Data frame received for 5\nI0429 00:16:43.798919 1983 log.go:172] (0xc00060b4a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31182\nConnection to 172.17.0.12 31182 port [tcp/31182] succeeded!\nI0429 00:16:43.798944 1983 log.go:172] (0xc0009b13f0) Data frame received for 3\nI0429 00:16:43.798954 1983 log.go:172] (0xc00095a000) (3) Data frame handling\nI0429 00:16:43.800693 1983 
log.go:172] (0xc0009b13f0) Data frame received for 1\nI0429 00:16:43.800724 1983 log.go:172] (0xc00095a8c0) (1) Data frame handling\nI0429 00:16:43.800738 1983 log.go:172] (0xc00095a8c0) (1) Data frame sent\nI0429 00:16:43.800753 1983 log.go:172] (0xc0009b13f0) (0xc00095a8c0) Stream removed, broadcasting: 1\nI0429 00:16:43.801280 1983 log.go:172] (0xc0009b13f0) (0xc00095a8c0) Stream removed, broadcasting: 1\nI0429 00:16:43.801307 1983 log.go:172] (0xc0009b13f0) (0xc00095a000) Stream removed, broadcasting: 3\nI0429 00:16:43.801320 1983 log.go:172] (0xc0009b13f0) (0xc00060b4a0) Stream removed, broadcasting: 5\n" Apr 29 00:16:43.805: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:16:43.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1189" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.081 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":155,"skipped":2623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:16:43.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-3127dc34-01e5-4f34-aed8-0222d8002829 STEP: Creating a pod to test consume secrets Apr 29 00:16:43.912: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1" in namespace "projected-2945" to be "Succeeded or Failed" Apr 29 00:16:43.928: INFO: Pod "pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.95967ms Apr 29 00:16:45.933: INFO: Pod "pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020869033s Apr 29 00:16:47.937: INFO: Pod "pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025384262s STEP: Saw pod success Apr 29 00:16:47.937: INFO: Pod "pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1" satisfied condition "Succeeded or Failed" Apr 29 00:16:47.940: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1 container projected-secret-volume-test: STEP: delete the pod Apr 29 00:16:47.959: INFO: Waiting for pod pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1 to disappear Apr 29 00:16:47.963: INFO: Pod pod-projected-secrets-7673fc1e-41ca-473d-960f-46eb60078dc1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:16:47.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2945" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2649,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:16:47.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating 
a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 29 00:16:48.123: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9522 /api/v1/namespaces/watch-9522/configmaps/e2e-watch-test-label-changed cdcee345-b530-460e-aed4-67c40ae7e143 11851572 0 2020-04-29 00:16:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 00:16:48.123: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9522 /api/v1/namespaces/watch-9522/configmaps/e2e-watch-test-label-changed cdcee345-b530-460e-aed4-67c40ae7e143 11851573 0 2020-04-29 00:16:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 00:16:48.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9522 /api/v1/namespaces/watch-9522/configmaps/e2e-watch-test-label-changed cdcee345-b530-460e-aed4-67c40ae7e143 11851574 0 2020-04-29 00:16:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 29 00:16:58.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9522 /api/v1/namespaces/watch-9522/configmaps/e2e-watch-test-label-changed cdcee345-b530-460e-aed4-67c40ae7e143 11851644 0 2020-04-29 00:16:48 
+0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 00:16:58.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9522 /api/v1/namespaces/watch-9522/configmaps/e2e-watch-test-label-changed cdcee345-b530-460e-aed4-67c40ae7e143 11851645 0 2020-04-29 00:16:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 29 00:16:58.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9522 /api/v1/namespaces/watch-9522/configmaps/e2e-watch-test-label-changed cdcee345-b530-460e-aed4-67c40ae7e143 11851646 0 2020-04-29 00:16:48 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:16:58.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9522" for this suite. 
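The DELETED/ADDED pair recorded above follows from how a selector-scoped watch translates object updates into events: a watcher only sees objects matching its label selector, so an object whose label stops matching looks like a deletion, and restoring the label looks like an addition. A minimal Python sketch of that translation logic (hypothetical helper names, not the apiserver's implementation):

```python
def selector_matches(labels, selector):
    """True if every key/value pair required by the selector is present in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def watch_event(selector, old_labels, new_labels):
    """Translate an object update into the event a selector-scoped watcher sees."""
    old_in = selector_matches(old_labels, selector)
    new_in = selector_matches(new_labels, selector)
    if old_in and new_in:
        return "MODIFIED"
    if old_in and not new_in:
        return "DELETED"   # object left the selector -> watcher sees a delete
    if not old_in and new_in:
        return "ADDED"     # object re-entered the selector -> watcher sees an add
    return None            # invisible to this watcher either way

sel = {"watch-this-configmap": "label-changed-and-restored"}
# Changing the label value makes the configmap leave the selector:
print(watch_event(sel,
                  {"watch-this-configmap": "label-changed-and-restored"},
                  {"watch-this-configmap": "something-else"}))  # DELETED
# Restoring it brings the configmap back:
print(watch_event(sel,
                  {"watch-this-configmap": "something-else"},
                  {"watch-this-configmap": "label-changed-and-restored"}))  # ADDED
```

This mirrors the sequence in the log: the test sees MODIFIED then DELETED when the label changes, and ADDED again once the original value is restored.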
• [SLOW TEST:10.265 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":157,"skipped":2661,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:16:58.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 29 00:16:58.328: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 00:17:01.208: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:17:11.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9244" for this suite.
• [SLOW TEST:13.837 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":158,"skipped":2676,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:17:12.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Apr 29 00:17:12.724: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:17:12.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4296" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":159,"skipped":2730,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:17:12.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 29 00:17:17.557: INFO: Successfully updated pod "labelsupdate75992d82-ff71-45d6-8413-65b40b08e38d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:17:19.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8857" for this suite. 
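The `proxy -p 0` invocation logged in the kubectl proxy test above leans on standard socket semantics rather than anything kubectl-specific: binding to port 0 asks the kernel for any free ephemeral port, and the server then reports which port it actually got. A minimal sketch of that mechanism:

```python
import socket

# Bind to port 0: the kernel assigns a free ephemeral port;
# getsockname() reveals which one was chosen.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
assigned_port = s.getsockname()[1]
print(assigned_port)  # a nonzero, kernel-chosen port
s.close()
```

This is why the test can curl the proxy immediately after startup: the proxy prints the assigned port, so no port needs to be reserved in advance.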
• [SLOW TEST:6.695 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2736,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:17:19.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 29 00:17:19.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Apr 29 00:17:30.157: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 00:17:33.053: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:17:42.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9842" for this suite.
• [SLOW TEST:23.003 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":161,"skipped":2737,"failed":0}
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:17:42.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 29 00:17:46.723: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8084 PodName:pod-sharedvolume-a5901fc1-27bf-481f-988c-ec6276dda4fe ContainerName:busybox-main-container Stdin: CaptureStdout:true
CaptureStderr:true PreserveWhitespace:false} Apr 29 00:17:46.723: INFO: >>> kubeConfig: /root/.kube/config I0429 00:17:46.758044 7 log.go:172] (0xc0061a3d90) (0xc00225e000) Create stream I0429 00:17:46.758075 7 log.go:172] (0xc0061a3d90) (0xc00225e000) Stream added, broadcasting: 1 I0429 00:17:46.760261 7 log.go:172] (0xc0061a3d90) Reply frame received for 1 I0429 00:17:46.760307 7 log.go:172] (0xc0061a3d90) (0xc001a8c140) Create stream I0429 00:17:46.760334 7 log.go:172] (0xc0061a3d90) (0xc001a8c140) Stream added, broadcasting: 3 I0429 00:17:46.761445 7 log.go:172] (0xc0061a3d90) Reply frame received for 3 I0429 00:17:46.761487 7 log.go:172] (0xc0061a3d90) (0xc002178820) Create stream I0429 00:17:46.761502 7 log.go:172] (0xc0061a3d90) (0xc002178820) Stream added, broadcasting: 5 I0429 00:17:46.762523 7 log.go:172] (0xc0061a3d90) Reply frame received for 5 I0429 00:17:46.839947 7 log.go:172] (0xc0061a3d90) Data frame received for 5 I0429 00:17:46.839987 7 log.go:172] (0xc002178820) (5) Data frame handling I0429 00:17:46.840007 7 log.go:172] (0xc0061a3d90) Data frame received for 3 I0429 00:17:46.840016 7 log.go:172] (0xc001a8c140) (3) Data frame handling I0429 00:17:46.840026 7 log.go:172] (0xc001a8c140) (3) Data frame sent I0429 00:17:46.840035 7 log.go:172] (0xc0061a3d90) Data frame received for 3 I0429 00:17:46.840043 7 log.go:172] (0xc001a8c140) (3) Data frame handling I0429 00:17:46.841065 7 log.go:172] (0xc0061a3d90) Data frame received for 1 I0429 00:17:46.841220 7 log.go:172] (0xc00225e000) (1) Data frame handling I0429 00:17:46.841266 7 log.go:172] (0xc00225e000) (1) Data frame sent I0429 00:17:46.841312 7 log.go:172] (0xc0061a3d90) (0xc00225e000) Stream removed, broadcasting: 1 I0429 00:17:46.841340 7 log.go:172] (0xc0061a3d90) Go away received I0429 00:17:46.841445 7 log.go:172] (0xc0061a3d90) (0xc00225e000) Stream removed, broadcasting: 1 I0429 00:17:46.841774 7 log.go:172] (0xc0061a3d90) (0xc001a8c140) Stream removed, broadcasting: 3 I0429 
00:17:46.841835 7 log.go:172] (0xc0061a3d90) (0xc002178820) Stream removed, broadcasting: 5 Apr 29 00:17:46.841: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:17:46.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8084" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":162,"skipped":2737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:17:46.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a6b26ae2-fcea-459e-80d9-de46e2671f3c STEP: Creating a pod to test consume configMaps Apr 29 00:17:46.971: INFO: Waiting up to 5m0s for pod "pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5" in namespace "configmap-5300" to be "Succeeded or Failed" Apr 29 00:17:46.977: INFO: Pod "pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.953368ms Apr 29 00:17:48.981: INFO: Pod "pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009709501s Apr 29 00:17:50.985: INFO: Pod "pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013792148s STEP: Saw pod success Apr 29 00:17:50.985: INFO: Pod "pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5" satisfied condition "Succeeded or Failed" Apr 29 00:17:50.988: INFO: Trying to get logs from node latest-worker pod pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5 container configmap-volume-test: STEP: delete the pod Apr 29 00:17:51.025: INFO: Waiting for pod pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5 to disappear Apr 29 00:17:51.034: INFO: Pod pod-configmaps-70e4bc0d-7d00-49bd-ba14-66e3b66db9b5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:17:51.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5300" for this suite. 
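The "Waiting up to 5m0s for pod … to be \"Succeeded or Failed\"" lines above, with their growing Elapsed values, come from a timed poll loop over the pod's phase. A condensed Python sketch of that pattern (a generic helper for illustration, not the e2e framework's Go implementation):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed; return whether it succeeded."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False

# Simulated pod reporting Pending twice, then Succeeded -- mirroring
# the Pending/Pending/Succeeded sequence recorded in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
ok = wait_for(lambda: next(phases) in ("Succeeded", "Failed"),
              timeout=10.0, interval=0.0, sleep=lambda _: None)
print(ok)  # True
```

The framework's real loop additionally logs phase, reason, and elapsed time on every iteration, which is what produces the repeated INFO lines.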
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:17:51.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7381 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 29 00:17:51.109: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 29 00:17:51.200: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 00:17:53.215: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 29 00:17:55.204: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:17:57.204: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:17:59.209: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:18:01.205: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:18:03.204: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:18:05.204: INFO: The status of Pod netserver-0 is Running (Ready = 
false) Apr 29 00:18:07.205: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:18:09.203: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:18:11.207: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 29 00:18:13.204: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 29 00:18:13.210: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 29 00:18:17.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.128:8080/dial?request=hostname&protocol=udp&host=10.244.2.108&port=8081&tries=1'] Namespace:pod-network-test-7381 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 29 00:18:17.233: INFO: >>> kubeConfig: /root/.kube/config I0429 00:18:17.272347 7 log.go:172] (0xc006452580) (0xc001d10fa0) Create stream I0429 00:18:17.272384 7 log.go:172] (0xc006452580) (0xc001d10fa0) Stream added, broadcasting: 1 I0429 00:18:17.274647 7 log.go:172] (0xc006452580) Reply frame received for 1 I0429 00:18:17.274710 7 log.go:172] (0xc006452580) (0xc001d11180) Create stream I0429 00:18:17.274728 7 log.go:172] (0xc006452580) (0xc001d11180) Stream added, broadcasting: 3 I0429 00:18:17.276119 7 log.go:172] (0xc006452580) Reply frame received for 3 I0429 00:18:17.276162 7 log.go:172] (0xc006452580) (0xc00225f360) Create stream I0429 00:18:17.276177 7 log.go:172] (0xc006452580) (0xc00225f360) Stream added, broadcasting: 5 I0429 00:18:17.277283 7 log.go:172] (0xc006452580) Reply frame received for 5 I0429 00:18:17.367840 7 log.go:172] (0xc006452580) Data frame received for 3 I0429 00:18:17.367872 7 log.go:172] (0xc001d11180) (3) Data frame handling I0429 00:18:17.367893 7 log.go:172] (0xc001d11180) (3) Data frame sent I0429 00:18:17.368037 7 log.go:172] (0xc006452580) Data frame received for 3 I0429 00:18:17.368069 7 log.go:172] (0xc001d11180) (3) Data frame handling I0429 
00:18:17.368188 7 log.go:172] (0xc006452580) Data frame received for 5 I0429 00:18:17.368205 7 log.go:172] (0xc00225f360) (5) Data frame handling I0429 00:18:17.370049 7 log.go:172] (0xc006452580) Data frame received for 1 I0429 00:18:17.370079 7 log.go:172] (0xc001d10fa0) (1) Data frame handling I0429 00:18:17.370107 7 log.go:172] (0xc001d10fa0) (1) Data frame sent I0429 00:18:17.370133 7 log.go:172] (0xc006452580) (0xc001d10fa0) Stream removed, broadcasting: 1 I0429 00:18:17.370154 7 log.go:172] (0xc006452580) Go away received I0429 00:18:17.370258 7 log.go:172] (0xc006452580) (0xc001d10fa0) Stream removed, broadcasting: 1 I0429 00:18:17.370290 7 log.go:172] (0xc006452580) (0xc001d11180) Stream removed, broadcasting: 3 I0429 00:18:17.370316 7 log.go:172] (0xc006452580) (0xc00225f360) Stream removed, broadcasting: 5 Apr 29 00:18:17.370: INFO: Waiting for responses: map[] Apr 29 00:18:17.373: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.128:8080/dial?request=hostname&protocol=udp&host=10.244.1.127&port=8081&tries=1'] Namespace:pod-network-test-7381 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 29 00:18:17.373: INFO: >>> kubeConfig: /root/.kube/config I0429 00:18:17.405759 7 log.go:172] (0xc006452b00) (0xc001d11680) Create stream I0429 00:18:17.405827 7 log.go:172] (0xc006452b00) (0xc001d11680) Stream added, broadcasting: 1 I0429 00:18:17.408273 7 log.go:172] (0xc006452b00) Reply frame received for 1 I0429 00:18:17.408349 7 log.go:172] (0xc006452b00) (0xc001a8c3c0) Create stream I0429 00:18:17.408380 7 log.go:172] (0xc006452b00) (0xc001a8c3c0) Stream added, broadcasting: 3 I0429 00:18:17.409320 7 log.go:172] (0xc006452b00) Reply frame received for 3 I0429 00:18:17.409347 7 log.go:172] (0xc006452b00) (0xc001d11720) Create stream I0429 00:18:17.409357 7 log.go:172] (0xc006452b00) (0xc001d11720) Stream added, broadcasting: 5 I0429 00:18:17.410157 7 log.go:172] 
(0xc006452b00) Reply frame received for 5 I0429 00:18:17.486664 7 log.go:172] (0xc006452b00) Data frame received for 3 I0429 00:18:17.486757 7 log.go:172] (0xc001a8c3c0) (3) Data frame handling I0429 00:18:17.486804 7 log.go:172] (0xc001a8c3c0) (3) Data frame sent I0429 00:18:17.487242 7 log.go:172] (0xc006452b00) Data frame received for 5 I0429 00:18:17.487269 7 log.go:172] (0xc006452b00) Data frame received for 3 I0429 00:18:17.487292 7 log.go:172] (0xc001a8c3c0) (3) Data frame handling I0429 00:18:17.487324 7 log.go:172] (0xc001d11720) (5) Data frame handling I0429 00:18:17.488612 7 log.go:172] (0xc006452b00) Data frame received for 1 I0429 00:18:17.488629 7 log.go:172] (0xc001d11680) (1) Data frame handling I0429 00:18:17.488641 7 log.go:172] (0xc001d11680) (1) Data frame sent I0429 00:18:17.488655 7 log.go:172] (0xc006452b00) (0xc001d11680) Stream removed, broadcasting: 1 I0429 00:18:17.488707 7 log.go:172] (0xc006452b00) Go away received I0429 00:18:17.488740 7 log.go:172] (0xc006452b00) (0xc001d11680) Stream removed, broadcasting: 1 I0429 00:18:17.488760 7 log.go:172] (0xc006452b00) (0xc001a8c3c0) Stream removed, broadcasting: 3 I0429 00:18:17.488773 7 log.go:172] (0xc006452b00) (0xc001d11720) Stream removed, broadcasting: 5 Apr 29 00:18:17.488: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:18:17.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7381" for this suite. 
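Each connectivity probe in the networking test above is an HTTP GET against the netserver pod's `/dial` endpoint, with the actual UDP target encoded in the query string. Parsing one of the logged URLs with the standard library shows the parameters the test drives:

```python
from urllib.parse import urlsplit, parse_qs

# One of the probe URLs from the log above.
url = ("http://10.244.1.128:8080/dial"
       "?request=hostname&protocol=udp&host=10.244.2.108&port=8081&tries=1")

# parse_qs maps each key to a list of values; flatten to single values.
params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
print(params)
# {'request': 'hostname', 'protocol': 'udp', 'host': '10.244.2.108', 'port': '8081', 'tries': '1'}
```

So the webserver pod at 10.244.1.128 is asked to send a UDP "hostname" request to the netserver at 10.244.2.108:8081; the empty `Waiting for responses: map[]` lines indicate every expected peer answered.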
• [SLOW TEST:26.449 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:18:17.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-c50d782d-7fb7-4612-93a6-dd1889d9870e in namespace container-probe-9659 Apr 29 00:18:21.574: INFO: Started pod liveness-c50d782d-7fb7-4612-93a6-dd1889d9870e in namespace container-probe-9659 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 00:18:21.577: INFO: Initial restart count 
of pod liveness-c50d782d-7fb7-4612-93a6-dd1889d9870e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:22:22.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9659" for this suite. • [SLOW TEST:244.869 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2831,"failed":0} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:22:22.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8677 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller 
externalname-service in namespace services-8677 I0429 00:22:22.682627 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8677, replica count: 2 I0429 00:22:25.733081 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 00:22:28.733451 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 29 00:22:28.733: INFO: Creating new exec pod Apr 29 00:22:33.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8677 execpodt7wwt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 29 00:22:36.929: INFO: stderr: "I0429 00:22:36.856846 2023 log.go:172] (0xc0008fc840) (0xc000890280) Create stream\nI0429 00:22:36.856890 2023 log.go:172] (0xc0008fc840) (0xc000890280) Stream added, broadcasting: 1\nI0429 00:22:36.859615 2023 log.go:172] (0xc0008fc840) Reply frame received for 1\nI0429 00:22:36.859673 2023 log.go:172] (0xc0008fc840) (0xc000850280) Create stream\nI0429 00:22:36.859691 2023 log.go:172] (0xc0008fc840) (0xc000850280) Stream added, broadcasting: 3\nI0429 00:22:36.860683 2023 log.go:172] (0xc0008fc840) Reply frame received for 3\nI0429 00:22:36.860733 2023 log.go:172] (0xc0008fc840) (0xc000590000) Create stream\nI0429 00:22:36.860749 2023 log.go:172] (0xc0008fc840) (0xc000590000) Stream added, broadcasting: 5\nI0429 00:22:36.862001 2023 log.go:172] (0xc0008fc840) Reply frame received for 5\nI0429 00:22:36.921478 2023 log.go:172] (0xc0008fc840) Data frame received for 5\nI0429 00:22:36.921508 2023 log.go:172] (0xc000590000) (5) Data frame handling\nI0429 00:22:36.921536 2023 log.go:172] (0xc000590000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0429 00:22:36.921840 2023 log.go:172] (0xc0008fc840) Data 
frame received for 5\nI0429 00:22:36.921866 2023 log.go:172] (0xc000590000) (5) Data frame handling\nI0429 00:22:36.921896 2023 log.go:172] (0xc000590000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0429 00:22:36.922197 2023 log.go:172] (0xc0008fc840) Data frame received for 5\nI0429 00:22:36.922233 2023 log.go:172] (0xc000590000) (5) Data frame handling\nI0429 00:22:36.922470 2023 log.go:172] (0xc0008fc840) Data frame received for 3\nI0429 00:22:36.922512 2023 log.go:172] (0xc000850280) (3) Data frame handling\nI0429 00:22:36.923790 2023 log.go:172] (0xc0008fc840) Data frame received for 1\nI0429 00:22:36.923810 2023 log.go:172] (0xc000890280) (1) Data frame handling\nI0429 00:22:36.923821 2023 log.go:172] (0xc000890280) (1) Data frame sent\nI0429 00:22:36.923830 2023 log.go:172] (0xc0008fc840) (0xc000890280) Stream removed, broadcasting: 1\nI0429 00:22:36.924108 2023 log.go:172] (0xc0008fc840) (0xc000890280) Stream removed, broadcasting: 1\nI0429 00:22:36.924123 2023 log.go:172] (0xc0008fc840) (0xc000850280) Stream removed, broadcasting: 3\nI0429 00:22:36.924130 2023 log.go:172] (0xc0008fc840) (0xc000590000) Stream removed, broadcasting: 5\n" Apr 29 00:22:36.929: INFO: stdout: "" Apr 29 00:22:36.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-8677 execpodt7wwt -- /bin/sh -x -c nc -zv -t -w 2 10.96.150.29 80' Apr 29 00:22:37.133: INFO: stderr: "I0429 00:22:37.062640 2057 log.go:172] (0xc000566790) (0xc0004c0000) Create stream\nI0429 00:22:37.062696 2057 log.go:172] (0xc000566790) (0xc0004c0000) Stream added, broadcasting: 1\nI0429 00:22:37.065522 2057 log.go:172] (0xc000566790) Reply frame received for 1\nI0429 00:22:37.065621 2057 log.go:172] (0xc000566790) (0xc000827360) Create stream\nI0429 00:22:37.065643 2057 log.go:172] (0xc000566790) (0xc000827360) Stream added, broadcasting: 3\nI0429 00:22:37.066917 2057 log.go:172] 
(0xc000566790) Reply frame received for 3\nI0429 00:22:37.066951 2057 log.go:172] (0xc000566790) (0xc000827540) Create stream\nI0429 00:22:37.066962 2057 log.go:172] (0xc000566790) (0xc000827540) Stream added, broadcasting: 5\nI0429 00:22:37.067987 2057 log.go:172] (0xc000566790) Reply frame received for 5\nI0429 00:22:37.125059 2057 log.go:172] (0xc000566790) Data frame received for 3\nI0429 00:22:37.125088 2057 log.go:172] (0xc000827360) (3) Data frame handling\nI0429 00:22:37.125103 2057 log.go:172] (0xc000566790) Data frame received for 5\nI0429 00:22:37.125107 2057 log.go:172] (0xc000827540) (5) Data frame handling\nI0429 00:22:37.125200 2057 log.go:172] (0xc000827540) (5) Data frame sent\nI0429 00:22:37.125211 2057 log.go:172] (0xc000566790) Data frame received for 5\nI0429 00:22:37.125216 2057 log.go:172] (0xc000827540) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.150.29 80\nConnection to 10.96.150.29 80 port [tcp/http] succeeded!\nI0429 00:22:37.127088 2057 log.go:172] (0xc000566790) Data frame received for 1\nI0429 00:22:37.127115 2057 log.go:172] (0xc0004c0000) (1) Data frame handling\nI0429 00:22:37.127128 2057 log.go:172] (0xc0004c0000) (1) Data frame sent\nI0429 00:22:37.127149 2057 log.go:172] (0xc000566790) (0xc0004c0000) Stream removed, broadcasting: 1\nI0429 00:22:37.127173 2057 log.go:172] (0xc000566790) Go away received\nI0429 00:22:37.127635 2057 log.go:172] (0xc000566790) (0xc0004c0000) Stream removed, broadcasting: 1\nI0429 00:22:37.127665 2057 log.go:172] (0xc000566790) (0xc000827360) Stream removed, broadcasting: 3\nI0429 00:22:37.127689 2057 log.go:172] (0xc000566790) (0xc000827540) Stream removed, broadcasting: 5\n" Apr 29 00:22:37.133: INFO: stdout: "" Apr 29 00:22:37.133: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:22:37.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "services-8677" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.787 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":166,"skipped":2831,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:22:37.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 29 00:22:37.263: INFO: Waiting up to 5m0s for pod "var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1" in namespace "var-expansion-7868" to be "Succeeded or Failed" Apr 29 00:22:37.290: INFO: Pod "var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.534321ms Apr 29 00:22:39.305: INFO: Pod "var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042185418s Apr 29 00:22:41.309: INFO: Pod "var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046333542s STEP: Saw pod success Apr 29 00:22:41.309: INFO: Pod "var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1" satisfied condition "Succeeded or Failed" Apr 29 00:22:41.313: INFO: Trying to get logs from node latest-worker pod var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1 container dapi-container: STEP: delete the pod Apr 29 00:22:41.366: INFO: Waiting for pod var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1 to disappear Apr 29 00:22:41.371: INFO: Pod var-expansion-ed8dab1a-be63-4cfb-b2b5-f536724bdcf1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:22:41.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7868" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2844,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:22:41.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a40e1713-b7cc-4563-8be6-704662bb8c8a STEP: Creating a pod to test consume configMaps Apr 29 00:22:41.472: INFO: Waiting up to 5m0s for pod "pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac" in namespace "configmap-59" to be "Succeeded or Failed" Apr 29 00:22:41.486: INFO: Pod "pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 13.978984ms Apr 29 00:22:43.641: INFO: Pod "pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169552446s Apr 29 00:22:45.646: INFO: Pod "pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.173986373s STEP: Saw pod success Apr 29 00:22:45.646: INFO: Pod "pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac" satisfied condition "Succeeded or Failed" Apr 29 00:22:45.648: INFO: Trying to get logs from node latest-worker pod pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac container configmap-volume-test: STEP: delete the pod Apr 29 00:22:45.736: INFO: Waiting for pod pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac to disappear Apr 29 00:22:45.748: INFO: Pod pod-configmaps-90075296-88d8-4ee3-8199-02706d5ff8ac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:22:45.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-59" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2846,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:22:45.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:22:57.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5446" for this suite. • [SLOW TEST:11.276 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":169,"skipped":2851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:22:57.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 00:22:57.491: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 00:22:59.502: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723716577, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723716577, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723716577, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723716577, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 00:23:02.557: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:23:02.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3072" for this suite. STEP: Destroying namespace "webhook-3072-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.753 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":170,"skipped":2958,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:23:02.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 29 00:23:02.888: INFO: Waiting up to 5m0s for pod "downward-api-126e23ae-2091-491f-a2a4-7fae782a04de" in namespace "downward-api-4397" to be "Succeeded or Failed" Apr 29 00:23:02.898: INFO: Pod "downward-api-126e23ae-2091-491f-a2a4-7fae782a04de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.49125ms Apr 29 00:23:04.902: INFO: Pod "downward-api-126e23ae-2091-491f-a2a4-7fae782a04de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014340996s Apr 29 00:23:06.906: INFO: Pod "downward-api-126e23ae-2091-491f-a2a4-7fae782a04de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018598151s STEP: Saw pod success Apr 29 00:23:06.906: INFO: Pod "downward-api-126e23ae-2091-491f-a2a4-7fae782a04de" satisfied condition "Succeeded or Failed" Apr 29 00:23:06.910: INFO: Trying to get logs from node latest-worker pod downward-api-126e23ae-2091-491f-a2a4-7fae782a04de container dapi-container: STEP: delete the pod Apr 29 00:23:06.959: INFO: Waiting for pod downward-api-126e23ae-2091-491f-a2a4-7fae782a04de to disappear Apr 29 00:23:06.964: INFO: Pod downward-api-126e23ae-2091-491f-a2a4-7fae782a04de no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:23:06.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4397" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2974,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:23:06.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-0a87a69b-13d3-458b-9909-c1ef6d2228aa STEP: Creating secret with name s-test-opt-upd-a6c17427-97dc-4ae3-8fd6-642fa90c7daa STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0a87a69b-13d3-458b-9909-c1ef6d2228aa STEP: Updating secret s-test-opt-upd-a6c17427-97dc-4ae3-8fd6-642fa90c7daa STEP: Creating secret with name s-test-opt-create-f4c4017f-81bd-48e5-9fae-c08095514912 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:23:15.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8607" for this suite. 
• [SLOW TEST:8.169 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2988,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:23:15.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 29 00:23:23.258: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 29 00:23:23.265: INFO: Pod pod-with-poststart-http-hook still exists Apr 29 00:23:25.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 29 00:23:25.270: INFO: Pod pod-with-poststart-http-hook still exists Apr 29 00:23:27.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 29 00:23:27.269: INFO: Pod pod-with-poststart-http-hook still exists Apr 29 00:23:29.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 29 00:23:29.270: INFO: Pod pod-with-poststart-http-hook still exists Apr 29 00:23:31.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 29 00:23:31.269: INFO: Pod pod-with-poststart-http-hook still exists Apr 29 00:23:33.265: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 29 00:23:33.270: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:23:33.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9717" for this suite. 
• [SLOW TEST:18.140 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2993,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:23:33.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-86aad52a-2d82-45fe-9299-43918bfd907a STEP: Creating a pod to test consume secrets Apr 29 00:23:33.378: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9" in namespace "projected-7372" to be "Succeeded or Failed" Apr 29 00:23:33.396: INFO: Pod 
"pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.101593ms Apr 29 00:23:35.443: INFO: Pod "pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065452526s Apr 29 00:23:37.448: INFO: Pod "pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070043464s STEP: Saw pod success Apr 29 00:23:37.448: INFO: Pod "pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9" satisfied condition "Succeeded or Failed" Apr 29 00:23:37.450: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9 container projected-secret-volume-test: STEP: delete the pod Apr 29 00:23:37.552: INFO: Waiting for pod pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9 to disappear Apr 29 00:23:37.569: INFO: Pod pod-projected-secrets-258efe51-6a09-42b3-a7e6-aa045dbae5c9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:23:37.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7372" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2998,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:23:37.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 29 00:23:37.740: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03" in namespace "projected-674" to be "Succeeded or Failed" Apr 29 00:23:37.759: INFO: Pod "downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03": Phase="Pending", Reason="", readiness=false. Elapsed: 19.102755ms Apr 29 00:23:39.870: INFO: Pod "downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129384383s Apr 29 00:23:41.873: INFO: Pod "downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.133191997s STEP: Saw pod success Apr 29 00:23:41.873: INFO: Pod "downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03" satisfied condition "Succeeded or Failed" Apr 29 00:23:41.877: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03 container client-container: STEP: delete the pod Apr 29 00:23:41.895: INFO: Waiting for pod downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03 to disappear Apr 29 00:23:41.900: INFO: Pod downwardapi-volume-bad88210-1a25-4685-8672-2c07fb565e03 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:23:41.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-674" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3004,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:23:41.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-ba4d7da3-47c3-4cb1-9d8a-8171f5a2cb02 in namespace container-probe-4698 Apr 29 00:23:46.059: INFO: Started pod test-webserver-ba4d7da3-47c3-4cb1-9d8a-8171f5a2cb02 in namespace container-probe-4698 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 00:23:46.062: INFO: Initial restart count of pod test-webserver-ba4d7da3-47c3-4cb1-9d8a-8171f5a2cb02 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:27:46.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4698" for this suite. • [SLOW TEST:245.024 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":3012,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:27:46.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 00:27:51.225: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:27:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7878" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3013,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:27:51.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-6258ae49-dcfd-4a81-a90d-41dac73c4ad6 in namespace container-probe-947 Apr 29 00:27:55.398: INFO: Started pod liveness-6258ae49-dcfd-4a81-a90d-41dac73c4ad6 in namespace container-probe-947 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 00:27:55.400: INFO: Initial restart count of pod liveness-6258ae49-dcfd-4a81-a90d-41dac73c4ad6 is 0 Apr 29 00:28:15.620: INFO: Restart count of pod container-probe-947/liveness-6258ae49-dcfd-4a81-a90d-41dac73c4ad6 is now 1 (20.219431538s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:15.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-947" for this suite. 
• [SLOW TEST:24.417 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3033,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:15.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 29 00:28:15.905: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 00:28:15.956: INFO: Waiting for terminating namespaces to be deleted... 
Apr 29 00:28:15.958: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 29 00:28:16.001: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:28:16.001: INFO: Container kindnet-cni ready: true, restart count 0 Apr 29 00:28:16.001: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:28:16.001: INFO: Container kube-proxy ready: true, restart count 0 Apr 29 00:28:16.001: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 29 00:28:16.021: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:28:16.021: INFO: Container kindnet-cni ready: true, restart count 0 Apr 29 00:28:16.021: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:28:16.021: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c23b73f7-7ae1-48a6-8563-15466fadfd10 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-c23b73f7-7ae1-48a6-8563-15466fadfd10 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c23b73f7-7ae1-48a6-8563-15466fadfd10 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:24.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-997" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.575 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":179,"skipped":3046,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:24.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-43f7b836-9240-47ed-8969-79f5573980db STEP: Creating a pod to test consume secrets Apr 29 00:28:24.345: INFO: Waiting up to 5m0s for pod "pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc" in namespace "secrets-7101" to be "Succeeded or Failed" Apr 29 00:28:24.348: INFO: Pod "pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.344479ms Apr 29 00:28:26.352: INFO: Pod "pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006989452s Apr 29 00:28:28.356: INFO: Pod "pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011319733s STEP: Saw pod success Apr 29 00:28:28.356: INFO: Pod "pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc" satisfied condition "Succeeded or Failed" Apr 29 00:28:28.360: INFO: Trying to get logs from node latest-worker pod pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc container secret-volume-test: STEP: delete the pod Apr 29 00:28:28.379: INFO: Waiting for pod pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc to disappear Apr 29 00:28:28.390: INFO: Pod pod-secrets-60bc6b1a-5709-41df-806f-a74a0d1af5bc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7101" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:28.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Apr 29 00:28:28.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Apr 29 00:28:28.666: INFO: stderr: "" Apr 29 00:28:28.666: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:28.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-639" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":181,"skipped":3078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:28.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 29 00:28:28.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a" in namespace "downward-api-1861" to be "Succeeded or Failed" Apr 29 00:28:28.796: INFO: Pod "downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.633192ms Apr 29 00:28:30.799: INFO: Pod "downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03917026s Apr 29 00:28:32.804: INFO: Pod "downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043608886s STEP: Saw pod success Apr 29 00:28:32.804: INFO: Pod "downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a" satisfied condition "Succeeded or Failed" Apr 29 00:28:32.807: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a container client-container: STEP: delete the pod Apr 29 00:28:32.826: INFO: Waiting for pod downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a to disappear Apr 29 00:28:32.830: INFO: Pod downwardapi-volume-b3850b8b-ed94-4537-93d4-51934bfda04a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:32.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1861" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:32.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 
secret with name secret-test-f65a917f-7f11-4531-9383-a860247cdc8b STEP: Creating a pod to test consume secrets Apr 29 00:28:32.985: INFO: Waiting up to 5m0s for pod "pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0" in namespace "secrets-1621" to be "Succeeded or Failed" Apr 29 00:28:33.010: INFO: Pod "pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.59802ms Apr 29 00:28:35.014: INFO: Pod "pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028580768s Apr 29 00:28:37.018: INFO: Pod "pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032817052s STEP: Saw pod success Apr 29 00:28:37.018: INFO: Pod "pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0" satisfied condition "Succeeded or Failed" Apr 29 00:28:37.021: INFO: Trying to get logs from node latest-worker pod pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0 container secret-volume-test: STEP: delete the pod Apr 29 00:28:37.042: INFO: Waiting for pod pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0 to disappear Apr 29 00:28:37.057: INFO: Pod pod-secrets-9b3b61e3-0b91-48d8-9c9a-7ac6f7c91ce0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:37.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1621" for this suite. STEP: Destroying namespace "secret-namespace-7124" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:37.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 29 00:28:37.155: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:28:37.159: INFO: Number of nodes with available pods: 0 Apr 29 00:28:37.159: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:28:38.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:28:38.175: INFO: Number of nodes with available pods: 0 Apr 29 00:28:38.175: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:28:39.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:28:39.166: INFO: Number of nodes with available pods: 0 Apr 29 00:28:39.166: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:28:40.163: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:28:40.166: INFO: Number of nodes with available pods: 1 Apr 29 00:28:40.166: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:28:41.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:28:41.169: INFO: Number of nodes with available pods: 2 Apr 29 00:28:41.169: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 29 00:28:41.194: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:28:41.199: INFO: Number of nodes with available pods: 2 Apr 29 00:28:41.199: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8710, will wait for the garbage collector to delete the pods Apr 29 00:28:42.408: INFO: Deleting DaemonSet.extensions daemon-set took: 110.821931ms Apr 29 00:28:42.709: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.254925ms Apr 29 00:28:53.112: INFO: Number of nodes with available pods: 0 Apr 29 00:28:53.112: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 00:28:53.116: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8710/daemonsets","resourceVersion":"11854603"},"items":null} Apr 29 00:28:53.118: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8710/pods","resourceVersion":"11854603"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:53.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8710" for this suite. 
• [SLOW TEST:16.066 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":184,"skipped":3164,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:28:53.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4753.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4753.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4753.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4753.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 00:28:59.350: INFO: DNS probes using dns-4753/dns-test-5d918240-d635-49c7-bc53-4e106b4ec350 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:28:59.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4753" for this suite. 
• [SLOW TEST:6.262 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":185,"skipped":3170,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:28:59.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:28:59.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2990'
Apr 29 00:28:59.882: INFO: stderr: ""
Apr 29 00:28:59.882: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Apr 29 00:28:59.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2990'
Apr 29 00:29:00.163: INFO: stderr: ""
Apr 29 00:29:00.163: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 29 00:29:01.166: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:29:01.166: INFO: Found 0 / 1
Apr 29 00:29:02.166: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:29:02.166: INFO: Found 0 / 1
Apr 29 00:29:03.167: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:29:03.168: INFO: Found 1 / 1
Apr 29 00:29:03.168: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 29 00:29:03.171: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:29:03.171: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 29 00:29:03.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-5jqfm --namespace=kubectl-2990'
Apr 29 00:29:03.277: INFO: stderr: ""
Apr 29 00:29:03.277: INFO: stdout: "Name: agnhost-master-5jqfm\nNamespace: kubectl-2990\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Wed, 29 Apr 2020 00:28:59 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.138\nIPs:\n IP: 10.244.1.138\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://4855a3582a72dfc8a689f9b8df91bcf81f36b8494ac256a8fac70143efc3f2a9\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 29 Apr 2020 00:29:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nnnsz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nnnsz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nnnsz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-2990/agnhost-master-5jqfm to latest-worker2\n Normal Pulled 2s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n"
Apr 29 00:29:03.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2990'
Apr 29 00:29:03.383: INFO: stderr: ""
Apr 29 00:29:03.384: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2990\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-5jqfm\n"
Apr 29 00:29:03.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2990'
Apr 29 00:29:03.484: INFO: stderr: ""
Apr 29 00:29:03.484: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2990\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.85.166\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.138:6379\nSession Affinity: None\nEvents: \n"
Apr 29 00:29:03.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane'
Apr 29 00:29:03.633: INFO: stderr: ""
Apr 29 00:29:03.633: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 29 Apr 2020 00:28:59 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 29 Apr 2020 00:26:13 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 29 Apr 2020 00:26:13 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 29 Apr 2020 00:26:13 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 29 Apr 2020 00:26:13 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 44d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 44d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 44d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 44d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
Apr 29 00:29:03.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-2990'
Apr 29 00:29:03.748: INFO: stderr: ""
Apr 29 00:29:03.748: INFO: stdout: "Name: kubectl-2990\nLabels: e2e-framework=kubectl\n e2e-run=067565bc-1640-414d-8e1c-5b736f74e3cc\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:29:03.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2990" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":186,"skipped":3182,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:29:03.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 29 00:29:03.817: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 00:29:03.857: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 00:29:03.859: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 29 00:29:03.864: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 29 00:29:03.865: INFO: Container kindnet-cni ready: true, restart count 0
Apr 29 00:29:03.865: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 29 00:29:03.865: INFO: Container kube-proxy ready: true, restart count 0
Apr 29 00:29:03.865: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 29 00:29:03.871: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 29 00:29:03.871: INFO: Container kindnet-cni ready: true, restart count 0
Apr 29 00:29:03.871: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 29 00:29:03.871: INFO: Container kube-proxy ready: true, restart count 0
Apr 29 00:29:03.871: INFO: agnhost-master-5jqfm from kubectl-2990 started at 2020-04-29 00:28:59 +0000 UTC (1 container statuses recorded)
Apr 29 00:29:03.871: INFO: Container agnhost-master ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160a22db4d1b5e9f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:29:04.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5885" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":187,"skipped":3192,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:29:04.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-a072d012-a87e-4a40-a264-23489fb1537d in namespace container-probe-6276
Apr 29 00:29:09.077: INFO: Started pod liveness-a072d012-a87e-4a40-a264-23489fb1537d in namespace container-probe-6276
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 00:29:09.089: INFO: Initial restart count of pod liveness-a072d012-a87e-4a40-a264-23489fb1537d is 0
Apr 29 00:29:25.456: INFO: Restart count of pod container-probe-6276/liveness-a072d012-a87e-4a40-a264-23489fb1537d is now 1 (16.36716174s elapsed)
Apr 29 00:29:45.496: INFO: Restart count of pod container-probe-6276/liveness-a072d012-a87e-4a40-a264-23489fb1537d is now 2 (36.406767963s elapsed)
Apr 29 00:30:05.647: INFO: Restart count of pod container-probe-6276/liveness-a072d012-a87e-4a40-a264-23489fb1537d is now 3 (56.557397477s elapsed)
Apr 29 00:30:25.711: INFO: Restart count of pod container-probe-6276/liveness-a072d012-a87e-4a40-a264-23489fb1537d is now 4 (1m16.621816017s elapsed)
Apr 29 00:31:34.418: INFO: Restart count of pod container-probe-6276/liveness-a072d012-a87e-4a40-a264-23489fb1537d is now 5 (2m25.329193832s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:31:34.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6276" for this suite.
• [SLOW TEST:149.574 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3203,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:31:34.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d5670617-ff4a-43fa-84b7-1bef38641d34
STEP: Creating a pod to test consume configMaps
Apr 29 00:31:34.542: INFO: Waiting up to 5m0s for pod "pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4" in namespace "configmap-6432" to be "Succeeded or Failed"
Apr 29 00:31:34.546: INFO: Pod "pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.967434ms
Apr 29 00:31:36.550: INFO: Pod "pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008325061s
Apr 29 00:31:38.555: INFO: Pod "pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012834508s
STEP: Saw pod success
Apr 29 00:31:38.555: INFO: Pod "pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4" satisfied condition "Succeeded or Failed"
Apr 29 00:31:38.558: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4 container configmap-volume-test:
STEP: delete the pod
Apr 29 00:31:38.594: INFO: Waiting for pod pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4 to disappear
Apr 29 00:31:38.604: INFO: Pod pod-configmaps-9034be9a-c389-4ab8-9898-f8a4a506d9f4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:31:38.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6432" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3218,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:31:38.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-9930ded1-bb29-4d67-8fbc-0cec7eb2f24a
STEP: Creating a pod to test consume secrets
Apr 29 00:31:38.763: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9" in namespace "projected-1396" to be "Succeeded or Failed"
Apr 29 00:31:38.772: INFO: Pod "pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263794ms
Apr 29 00:31:40.776: INFO: Pod "pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012390124s
Apr 29 00:31:42.780: INFO: Pod "pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016512563s
STEP: Saw pod success
Apr 29 00:31:42.780: INFO: Pod "pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9" satisfied condition "Succeeded or Failed"
Apr 29 00:31:42.783: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9 container secret-volume-test:
STEP: delete the pod
Apr 29 00:31:42.840: INFO: Waiting for pod pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9 to disappear
Apr 29 00:31:42.843: INFO: Pod pod-projected-secrets-e90bf3d3-6643-4aa2-80ea-bae9bdb92bc9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:31:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1396" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3222,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:31:42.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:31:42.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926" in namespace "projected-7492" to be "Succeeded or Failed"
Apr 29 00:31:42.928: INFO: Pod "downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926": Phase="Pending", Reason="", readiness=false. Elapsed: 3.455525ms
Apr 29 00:31:44.931: INFO: Pod "downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007156346s
Apr 29 00:31:46.936: INFO: Pod "downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011289413s
STEP: Saw pod success
Apr 29 00:31:46.936: INFO: Pod "downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926" satisfied condition "Succeeded or Failed"
Apr 29 00:31:46.939: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926 container client-container:
STEP: delete the pod
Apr 29 00:31:46.987: INFO: Waiting for pod downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926 to disappear
Apr 29 00:31:47.012: INFO: Pod downwardapi-volume-05c20790-9f9f-495c-878b-dcc6f894a926 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:31:47.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7492" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3231,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:31:47.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:31:47.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 29 00:31:47.650: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T00:31:47Z generation:1 name:name1 resourceVersion:11855344 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:76b121c3-4a3d-487b-87ec-0f683ce9fa92] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 29 00:31:57.656: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T00:31:57Z generation:1 name:name2 resourceVersion:11855391 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11b28037-f313-48fd-982e-c43519f098e7] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 29 00:32:07.662: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T00:31:47Z generation:2 name:name1 resourceVersion:11855421 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:76b121c3-4a3d-487b-87ec-0f683ce9fa92] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 29 00:32:17.668: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T00:31:57Z generation:2 name:name2 resourceVersion:11855449 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11b28037-f313-48fd-982e-c43519f098e7] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 29 00:32:27.678: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T00:31:47Z generation:2 name:name1 resourceVersion:11855479 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:76b121c3-4a3d-487b-87ec-0f683ce9fa92] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 29 00:32:37.686: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T00:31:57Z generation:2 name:name2 resourceVersion:11855509 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11b28037-f313-48fd-982e-c43519f098e7] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:32:48.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2135" for this suite.
• [SLOW TEST:61.182 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":192,"skipped":3244,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:32:48.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 29 00:32:48.238: INFO: PodSpec: initContainers in spec.initContainers
Apr 29 00:33:36.913: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"pod-init-7f95c7b2-1490-498d-8793-a2c3264ca6e1", GenerateName:"", Namespace:"init-container-1767", SelfLink:"/api/v1/namespaces/init-container-1767/pods/pod-init-7f95c7b2-1490-498d-8793-a2c3264ca6e1", UID:"2dea1bcd-f230-4e2e-8501-16c9486ff8df", ResourceVersion:"11855721", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723717168, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"238404877"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8g98j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0047a0300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), 
StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8g98j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8g98j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8g98j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002e807d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009ce850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002e80860)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002e80880)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002e80888), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002e8088c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717168, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717168, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717168, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717168, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.128", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.128"}}, StartTime:(*v1.Time)(0xc002aa0ae0), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009ce930)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009ce9a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://577e05ea13af6c2374b740c5666b8033a26226a46ec5dc91a15384bba8587222", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002aa0ca0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002aa0b00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002e8090f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:33:36.914: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1767" for this suite. • [SLOW TEST:48.717 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":193,"skipped":3263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:33:36.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 29 00:33:36.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod 
--restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8575' Apr 29 00:33:40.225: INFO: stderr: "" Apr 29 00:33:40.225: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 29 00:33:40.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8575' Apr 29 00:33:52.752: INFO: stderr: "" Apr 29 00:33:52.752: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:33:52.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8575" for this suite. • [SLOW TEST:15.845 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":194,"skipped":3296,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:33:52.768: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:33:52.843: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 29 00:33:57.870: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 29 00:33:57.871: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 29 00:34:01.958: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3649 /apis/apps/v1/namespaces/deployment-3649/deployments/test-cleanup-deployment 70118923-af88-4e3b-b500-9527274d7ad0 11855878 1 2020-04-29 00:33:57 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e80ea8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-29 00:33:58 +0000 UTC,LastTransitionTime:2020-04-29 00:33:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-577c77b589" has successfully progressed.,LastUpdateTime:2020-04-29 00:34:01 +0000 UTC,LastTransitionTime:2020-04-29 00:33:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 29 00:34:01.962: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-3649 /apis/apps/v1/namespaces/deployment-3649/replicasets/test-cleanup-deployment-577c77b589 cc52be82-83b0-4b3f-8755-f392f2a2e13b 11855867 1 2020-04-29 00:33:57 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 70118923-af88-4e3b-b500-9527274d7ad0 0xc002526fe7 0xc002526fe8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 
00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002527058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:34:01.965: INFO: Pod "test-cleanup-deployment-577c77b589-j6drb" is available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-j6drb test-cleanup-deployment-577c77b589- deployment-3649 /api/v1/namespaces/deployment-3649/pods/test-cleanup-deployment-577c77b589-j6drb 08d3c5c0-97b0-4fb9-a1da-bf26163c3d04 11855866 0 2020-04-29 00:33:57 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 cc52be82-83b0-4b3f-8755-f392f2a2e13b 0xc0054dd8c7 0xc0054dd8c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pq57l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pq57l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pq57l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},
Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:33:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:34:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:34:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:33:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.131,StartTime:2020-04-29 00:33:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 00:34:00 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://3e64586832d936b33a71671f3435ee3bd3efe977acc2421a244f56bedb283520,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:34:01.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3649" for this suite. • [SLOW TEST:9.205 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":195,"skipped":3313,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:34:01.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:34:06.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-39" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3316,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:34:06.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 29 00:34:12.664: INFO: Successfully updated pod "adopt-release-8nx8x" STEP: Checking that the Job readopts the Pod Apr 29 00:34:12.664: INFO: Waiting up to 15m0s for pod "adopt-release-8nx8x" in namespace "job-888" to be "adopted" Apr 29 00:34:12.681: INFO: Pod "adopt-release-8nx8x": Phase="Running", Reason="", readiness=true. 
Elapsed: 17.183628ms Apr 29 00:34:14.685: INFO: Pod "adopt-release-8nx8x": Phase="Running", Reason="", readiness=true. Elapsed: 2.020839806s Apr 29 00:34:14.685: INFO: Pod "adopt-release-8nx8x" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 29 00:34:15.194: INFO: Successfully updated pod "adopt-release-8nx8x" STEP: Checking that the Job releases the Pod Apr 29 00:34:15.194: INFO: Waiting up to 15m0s for pod "adopt-release-8nx8x" in namespace "job-888" to be "released" Apr 29 00:34:15.200: INFO: Pod "adopt-release-8nx8x": Phase="Running", Reason="", readiness=true. Elapsed: 6.565825ms Apr 29 00:34:17.205: INFO: Pod "adopt-release-8nx8x": Phase="Running", Reason="", readiness=true. Elapsed: 2.01115926s Apr 29 00:34:17.205: INFO: Pod "adopt-release-8nx8x" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:34:17.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-888" for this suite. 
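The adopt/release sequence above hinges on label-selector matching: the Job controller adopts a running orphan whose labels satisfy its selector, and releases a pod once the matching labels are removed. A minimal sketch of that matching rule (function and variable names are illustrative, not from the e2e framework):

```python
# Sketch of the label-selector matching that drives adoption/release.
# A controller adopts an orphaned pod whose labels satisfy its selector
# and releases a pod whose labels no longer match.

def selector_matches(selector: dict, labels: dict) -> bool:
    """True if every key/value pair in the selector appears in the pod labels."""
    return all(labels.get(k) == v for k, v in selector.items())

job_selector = {"job-name": "adopt-release"}  # illustrative selector

adopted_pod = {"job-name": "adopt-release", "extra": "label"}
released_pod = {"extra": "label"}  # job-name label removed, as in the test

print(selector_matches(job_selector, adopted_pod))   # True  -> adopt
print(selector_matches(job_selector, released_pod))  # False -> release
```

Real controllers additionally record the relationship via an ownerReference with `controller: true`, but the selector check above is what decides whether a pod is a candidate at all.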
• [SLOW TEST:11.138 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":197,"skipped":3324,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:34:17.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:34:17.326: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.125371ms) Apr 29 00:34:17.329: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.149348ms) Apr 29 00:34:17.332: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.383533ms) Apr 29 00:34:17.336: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.273402ms) Apr 29 00:34:17.339: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.482863ms) Apr 29 00:34:17.343: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.427233ms) Apr 29 00:34:17.346: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.121358ms) Apr 29 00:34:17.349: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.354433ms) Apr 29 00:34:17.353: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.523126ms) Apr 29 00:34:17.356: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.052117ms) Apr 29 00:34:17.359: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.459476ms) Apr 29 00:34:17.363: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.212881ms) Apr 29 00:34:17.366: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.057875ms) Apr 29 00:34:17.369: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.109284ms) Apr 29 00:34:17.372: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.322709ms) Apr 29 00:34:17.375: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.236232ms) Apr 29 00:34:17.378: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.789557ms) Apr 29 00:34:17.381: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.810572ms) Apr 29 00:34:17.384: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.028546ms) Apr 29 00:34:17.387: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/ pods/
(200; 2.705443ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:34:17.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":198,"skipped":3338,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:34:17.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 29 00:34:17.459: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:34:32.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3917" for this suite. 
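Each probe above hits the node proxy subresource path `/api/v1/nodes/<node>:<port>/proxy/logs/` and logs a status code plus latency as `(200; <n>ms)`. For summarizing such runs offline, a small parser like the following works; this is a sketch, and the regex and helper name are my own, not part of the e2e framework:

```python
import re

# Parse "(<status>; <latency>ms)" records as they appear in the proxy test
# output and compute a simple latency summary.
RECORD = re.compile(r"\((\d+); ([0-9.]+)ms\)")

def summarize(log: str):
    """Return (status_codes, mean_latency_ms) for all proxy records found."""
    hits = RECORD.findall(log)
    codes = [int(code) for code, _ in hits]
    latencies = [float(ms) for _, ms in hits]
    return codes, sum(latencies) / len(latencies)

# Three records taken from the run above.
sample = "(200; 5.125371ms) ... (200; 3.149348ms) ... (200; 2.705443ms)"
codes, mean_ms = summarize(sample)
print(codes)  # [200, 200, 200]
```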
• [SLOW TEST:15.599 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3353,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:34:32.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0429 00:34:43.094244 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 29 00:34:43.094: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:34:43.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-414" for this suite. 
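The cascading delete verified above is driven by ownerReferences: after the RC is deleted without orphaning, the garbage collector removes every pod whose owners are all gone. A rough sketch of that decision under simplified assumptions (the dict shapes and names are illustrative, not the controller's actual types):

```python
# Sketch of ownerReference-based garbage collection: a dependent object is
# collected when none of its owners still exist and deletion was not
# requested with the "orphan" propagation policy.

def should_collect(obj: dict, live_uids: set) -> bool:
    owners = obj.get("ownerReferences", [])
    if obj.get("orphaned"):  # "orphan" propagation keeps dependents alive
        return False
    # A dependent with at least one owner, all of them gone, is garbage.
    return bool(owners) and all(ref["uid"] not in live_uids for ref in owners)

live = {"rc-uid-2"}  # the deleted RC's UID ("rc-uid-1") is no longer live

pod_from_deleted_rc = {"ownerReferences": [{"uid": "rc-uid-1"}]}
pod_from_live_rc = {"ownerReferences": [{"uid": "rc-uid-2"}]}
standalone_pod = {"ownerReferences": []}

print(should_collect(pod_from_deleted_rc, live))  # True: only owner is gone
print(should_collect(pod_from_live_rc, live))     # False: owner still alive
print(should_collect(standalone_pod, live))       # False: nothing owns it
```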
• [SLOW TEST:10.143 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":200,"skipped":3365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:34:43.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-d592ed5b-93bc-4a4a-ada7-52ce773c2623
STEP: Creating a pod to test consume secrets
Apr 29 00:34:43.245: INFO: Waiting up to 5m0s for pod "pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51" in namespace "secrets-7667" to be "Succeeded or Failed"
Apr 29 00:34:43.249: INFO: Pod "pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51": Phase="Pending", Reason="", readiness=false. Elapsed: 3.38025ms
Apr 29 00:34:45.253: INFO: Pod "pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008032803s
Apr 29 00:34:47.257: INFO: Pod "pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012108416s
STEP: Saw pod success
Apr 29 00:34:47.257: INFO: Pod "pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51" satisfied condition "Succeeded or Failed"
Apr 29 00:34:47.260: INFO: Trying to get logs from node latest-worker pod pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51 container secret-volume-test:
STEP: delete the pod
Apr 29 00:34:47.280: INFO: Waiting for pod pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51 to disappear
Apr 29 00:34:47.284: INFO: Pod pod-secrets-6ec58795-3f5b-47c1-b459-740bb81d7c51 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:34:47.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7667" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3411,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:34:47.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:34:47.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5843" for this suite.
STEP: Destroying namespace "nspatchtest-45d2608b-ce65-4cf5-9924-09d6dbd194f7-1266" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":202,"skipped":3418,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:34:47.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-aeb06142-f36d-4d9b-b120-5cbd9071729c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:34:53.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9708" for this suite.
• [SLOW TEST:6.447 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3425,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:34:53.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:34:56.825: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 00:34:58.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717296, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717296, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717296, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717296, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:35:01.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:01.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7386" for this suite.
STEP: Destroying namespace "webhook-7386-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.116 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":204,"skipped":3435,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:02.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:35:02.097: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-19fe3424-79bc-4a05-a4fa-9b6d592483b3" in namespace "security-context-test-4639" to be "Succeeded or Failed"
Apr 29 00:35:02.112: INFO: Pod "busybox-readonly-false-19fe3424-79bc-4a05-a4fa-9b6d592483b3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.452817ms
Apr 29 00:35:04.116: INFO: Pod "busybox-readonly-false-19fe3424-79bc-4a05-a4fa-9b6d592483b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019810034s
Apr 29 00:35:06.121: INFO: Pod "busybox-readonly-false-19fe3424-79bc-4a05-a4fa-9b6d592483b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024194885s
Apr 29 00:35:06.121: INFO: Pod "busybox-readonly-false-19fe3424-79bc-4a05-a4fa-9b6d592483b3" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:06.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4639" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3445,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:06.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 29 00:35:06.198: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:13.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8937" for this suite.
• [SLOW TEST:7.348 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":206,"skipped":3447,"failed":0}
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:13.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:35:13.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8" in namespace "projected-2613" to be "Succeeded or Failed"
Apr 29 00:35:13.561: INFO: Pod "downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022544ms
Apr 29 00:35:15.565: INFO: Pod "downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007798566s
Apr 29 00:35:17.569: INFO: Pod "downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8": Phase="Running", Reason="", readiness=true. Elapsed: 4.011112157s
Apr 29 00:35:19.573: INFO: Pod "downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015638293s
STEP: Saw pod success
Apr 29 00:35:19.573: INFO: Pod "downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8" satisfied condition "Succeeded or Failed"
Apr 29 00:35:19.576: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8 container client-container:
STEP: delete the pod
Apr 29 00:35:19.636: INFO: Waiting for pod downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8 to disappear
Apr 29 00:35:19.704: INFO: Pod downwardapi-volume-24964723-0a57-4597-b2b6-29e727599cd8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:19.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2613" for this suite.
• [SLOW TEST:6.234 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3447,"failed":0}
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:19.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-3fa89e9d-905e-491a-adcb-600ad125770a
STEP: Creating a pod to test consume configMaps
Apr 29 00:35:19.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec" in namespace "configmap-3493" to be "Succeeded or Failed"
Apr 29 00:35:19.839: INFO: Pod "pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381685ms
Apr 29 00:35:21.926: INFO: Pod "pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094103392s
Apr 29 00:35:23.930: INFO: Pod "pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098076576s
STEP: Saw pod success
Apr 29 00:35:23.930: INFO: Pod "pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec" satisfied condition "Succeeded or Failed"
Apr 29 00:35:23.934: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec container configmap-volume-test:
STEP: delete the pod
Apr 29 00:35:24.007: INFO: Waiting for pod pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec to disappear
Apr 29 00:35:24.011: INFO: Pod pod-configmaps-f19bdafa-0300-41cb-a164-e82e17c9dcec no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:24.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3493" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:24.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0429 00:35:25.152731 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 00:35:25.152: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:25.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5049" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":209,"skipped":3480,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:25.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Apr 29 00:35:25.235: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix147312716/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:35:25.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2034" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":210,"skipped":3514,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:35:25.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6015
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6015
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6015
Apr 29 00:35:25.528: INFO: Found 0 stateful pods, waiting for 1
Apr 29 00:35:35.532: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Apr 29 00:35:35.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:35:35.763: INFO: stderr: "I0429 00:35:35.665961 2330 log.go:172] (0xc0009766e0) (0xc000404aa0) Create stream\nI0429 00:35:35.666030 2330 log.go:172] (0xc0009766e0) (0xc000404aa0) Stream added, broadcasting: 1\nI0429 00:35:35.669389 2330 log.go:172] (0xc0009766e0) Reply frame received for 1\nI0429 00:35:35.669435 2330 log.go:172] (0xc0009766e0) (0xc000aea000) Create stream\nI0429 00:35:35.669453 2330 log.go:172] (0xc0009766e0) (0xc000aea000) Stream added, broadcasting: 3\nI0429 00:35:35.670359 2330 log.go:172] (0xc0009766e0) Reply frame received for 3\nI0429 00:35:35.670407 2330 log.go:172] (0xc0009766e0) (0xc000aea0a0) Create stream\nI0429 00:35:35.670431 2330 log.go:172] (0xc0009766e0) (0xc000aea0a0) Stream added, broadcasting: 5\nI0429 00:35:35.671258 2330 log.go:172] (0xc0009766e0) Reply frame received for 5\nI0429 00:35:35.725871 2330 log.go:172] (0xc0009766e0) Data frame received for 5\nI0429 00:35:35.725902 2330 log.go:172] (0xc000aea0a0) (5) Data frame handling\nI0429 00:35:35.725916 2330 log.go:172] (0xc000aea0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:35:35.755797 2330 log.go:172] (0xc0009766e0) Data frame received for 3\nI0429 00:35:35.755832 2330 log.go:172] (0xc000aea000) (3) Data frame handling\nI0429 00:35:35.755859 2330 log.go:172] (0xc000aea000) (3) Data frame sent\nI0429 00:35:35.755898 2330 log.go:172] (0xc0009766e0) Data frame received for 5\nI0429 00:35:35.755914 2330 log.go:172] (0xc000aea0a0) (5) Data frame handling\nI0429 00:35:35.755934 2330 log.go:172] (0xc0009766e0) Data frame received for 3\nI0429 00:35:35.755944 2330 log.go:172] (0xc000aea000) (3) Data frame handling\nI0429 00:35:35.757940 2330 log.go:172] (0xc0009766e0) Data frame received for 1\nI0429 00:35:35.757971 2330 log.go:172] (0xc000404aa0) (1) Data frame handling\nI0429 
00:35:35.757987 2330 log.go:172] (0xc000404aa0) (1) Data frame sent\nI0429 00:35:35.758005 2330 log.go:172] (0xc0009766e0) (0xc000404aa0) Stream removed, broadcasting: 1\nI0429 00:35:35.758221 2330 log.go:172] (0xc0009766e0) Go away received\nI0429 00:35:35.758475 2330 log.go:172] (0xc0009766e0) (0xc000404aa0) Stream removed, broadcasting: 1\nI0429 00:35:35.758503 2330 log.go:172] (0xc0009766e0) (0xc000aea000) Stream removed, broadcasting: 3\nI0429 00:35:35.758518 2330 log.go:172] (0xc0009766e0) (0xc000aea0a0) Stream removed, broadcasting: 5\n" Apr 29 00:35:35.763: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:35:35.763: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:35:35.766: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 29 00:35:45.772: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:35:45.772: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:35:45.790: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999576s Apr 29 00:35:46.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991638394s Apr 29 00:35:47.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986885675s Apr 29 00:35:48.803: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983202991s Apr 29 00:35:49.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978534277s Apr 29 00:35:50.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.974427545s Apr 29 00:35:51.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969831529s Apr 29 00:35:52.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.96651385s Apr 29 00:35:53.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.962943992s 
Apr 29 00:35:54.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 958.719994ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6015 Apr 29 00:35:55.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:35:56.076: INFO: stderr: "I0429 00:35:55.965774 2352 log.go:172] (0xc00097a2c0) (0xc000740280) Create stream\nI0429 00:35:55.965875 2352 log.go:172] (0xc00097a2c0) (0xc000740280) Stream added, broadcasting: 1\nI0429 00:35:55.969512 2352 log.go:172] (0xc00097a2c0) Reply frame received for 1\nI0429 00:35:55.969551 2352 log.go:172] (0xc00097a2c0) (0xc0005a1220) Create stream\nI0429 00:35:55.969560 2352 log.go:172] (0xc00097a2c0) (0xc0005a1220) Stream added, broadcasting: 3\nI0429 00:35:55.970428 2352 log.go:172] (0xc00097a2c0) Reply frame received for 3\nI0429 00:35:55.970489 2352 log.go:172] (0xc00097a2c0) (0xc00097e000) Create stream\nI0429 00:35:55.970515 2352 log.go:172] (0xc00097a2c0) (0xc00097e000) Stream added, broadcasting: 5\nI0429 00:35:55.971443 2352 log.go:172] (0xc00097a2c0) Reply frame received for 5\nI0429 00:35:56.070813 2352 log.go:172] (0xc00097a2c0) Data frame received for 5\nI0429 00:35:56.070872 2352 log.go:172] (0xc00097e000) (5) Data frame handling\nI0429 00:35:56.070897 2352 log.go:172] (0xc00097e000) (5) Data frame sent\nI0429 00:35:56.070914 2352 log.go:172] (0xc00097a2c0) Data frame received for 5\nI0429 00:35:56.070927 2352 log.go:172] (0xc00097e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:35:56.070979 2352 log.go:172] (0xc00097a2c0) Data frame received for 3\nI0429 00:35:56.071021 2352 log.go:172] (0xc0005a1220) (3) Data frame handling\nI0429 00:35:56.071053 2352 log.go:172] (0xc0005a1220) (3) Data frame sent\nI0429 00:35:56.071085 
2352 log.go:172] (0xc00097a2c0) Data frame received for 3\nI0429 00:35:56.071097 2352 log.go:172] (0xc0005a1220) (3) Data frame handling\nI0429 00:35:56.072450 2352 log.go:172] (0xc00097a2c0) Data frame received for 1\nI0429 00:35:56.072472 2352 log.go:172] (0xc000740280) (1) Data frame handling\nI0429 00:35:56.072485 2352 log.go:172] (0xc000740280) (1) Data frame sent\nI0429 00:35:56.072500 2352 log.go:172] (0xc00097a2c0) (0xc000740280) Stream removed, broadcasting: 1\nI0429 00:35:56.072625 2352 log.go:172] (0xc00097a2c0) Go away received\nI0429 00:35:56.072860 2352 log.go:172] (0xc00097a2c0) (0xc000740280) Stream removed, broadcasting: 1\nI0429 00:35:56.072877 2352 log.go:172] (0xc00097a2c0) (0xc0005a1220) Stream removed, broadcasting: 3\nI0429 00:35:56.072885 2352 log.go:172] (0xc00097a2c0) (0xc00097e000) Stream removed, broadcasting: 5\n" Apr 29 00:35:56.076: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:35:56.076: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:35:56.080: INFO: Found 1 stateful pods, waiting for 3 Apr 29 00:36:06.086: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:36:06.086: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 29 00:36:06.086: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 29 00:36:06.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:36:06.322: INFO: stderr: "I0429 00:36:06.232450 2375 log.go:172] (0xc00076e160) (0xc000671220) Create stream\nI0429 
00:36:06.232506 2375 log.go:172] (0xc00076e160) (0xc000671220) Stream added, broadcasting: 1\nI0429 00:36:06.235295 2375 log.go:172] (0xc00076e160) Reply frame received for 1\nI0429 00:36:06.235343 2375 log.go:172] (0xc00076e160) (0xc0006712c0) Create stream\nI0429 00:36:06.235359 2375 log.go:172] (0xc00076e160) (0xc0006712c0) Stream added, broadcasting: 3\nI0429 00:36:06.236262 2375 log.go:172] (0xc00076e160) Reply frame received for 3\nI0429 00:36:06.236306 2375 log.go:172] (0xc00076e160) (0xc00055eaa0) Create stream\nI0429 00:36:06.236321 2375 log.go:172] (0xc00076e160) (0xc00055eaa0) Stream added, broadcasting: 5\nI0429 00:36:06.237338 2375 log.go:172] (0xc00076e160) Reply frame received for 5\nI0429 00:36:06.314261 2375 log.go:172] (0xc00076e160) Data frame received for 5\nI0429 00:36:06.314289 2375 log.go:172] (0xc00055eaa0) (5) Data frame handling\nI0429 00:36:06.314301 2375 log.go:172] (0xc00055eaa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:36:06.314357 2375 log.go:172] (0xc00076e160) Data frame received for 3\nI0429 00:36:06.314406 2375 log.go:172] (0xc0006712c0) (3) Data frame handling\nI0429 00:36:06.314463 2375 log.go:172] (0xc0006712c0) (3) Data frame sent\nI0429 00:36:06.314502 2375 log.go:172] (0xc00076e160) Data frame received for 3\nI0429 00:36:06.314521 2375 log.go:172] (0xc0006712c0) (3) Data frame handling\nI0429 00:36:06.314676 2375 log.go:172] (0xc00076e160) Data frame received for 5\nI0429 00:36:06.314706 2375 log.go:172] (0xc00055eaa0) (5) Data frame handling\nI0429 00:36:06.316295 2375 log.go:172] (0xc00076e160) Data frame received for 1\nI0429 00:36:06.316320 2375 log.go:172] (0xc000671220) (1) Data frame handling\nI0429 00:36:06.316339 2375 log.go:172] (0xc000671220) (1) Data frame sent\nI0429 00:36:06.316359 2375 log.go:172] (0xc00076e160) (0xc000671220) Stream removed, broadcasting: 1\nI0429 00:36:06.316441 2375 log.go:172] (0xc00076e160) Go away received\nI0429 00:36:06.316812 2375 log.go:172] 
(0xc00076e160) (0xc000671220) Stream removed, broadcasting: 1\nI0429 00:36:06.316831 2375 log.go:172] (0xc00076e160) (0xc0006712c0) Stream removed, broadcasting: 3\nI0429 00:36:06.316843 2375 log.go:172] (0xc00076e160) (0xc00055eaa0) Stream removed, broadcasting: 5\n" Apr 29 00:36:06.322: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:36:06.322: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:36:06.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:36:06.535: INFO: stderr: "I0429 00:36:06.446504 2396 log.go:172] (0xc00003a420) (0xc0005a8140) Create stream\nI0429 00:36:06.446559 2396 log.go:172] (0xc00003a420) (0xc0005a8140) Stream added, broadcasting: 1\nI0429 00:36:06.449516 2396 log.go:172] (0xc00003a420) Reply frame received for 1\nI0429 00:36:06.449573 2396 log.go:172] (0xc00003a420) (0xc000900000) Create stream\nI0429 00:36:06.449589 2396 log.go:172] (0xc00003a420) (0xc000900000) Stream added, broadcasting: 3\nI0429 00:36:06.450618 2396 log.go:172] (0xc00003a420) Reply frame received for 3\nI0429 00:36:06.450663 2396 log.go:172] (0xc00003a420) (0xc0009000a0) Create stream\nI0429 00:36:06.450675 2396 log.go:172] (0xc00003a420) (0xc0009000a0) Stream added, broadcasting: 5\nI0429 00:36:06.452023 2396 log.go:172] (0xc00003a420) Reply frame received for 5\nI0429 00:36:06.500264 2396 log.go:172] (0xc00003a420) Data frame received for 5\nI0429 00:36:06.500323 2396 log.go:172] (0xc0009000a0) (5) Data frame handling\nI0429 00:36:06.500369 2396 log.go:172] (0xc0009000a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:36:06.527806 2396 log.go:172] (0xc00003a420) Data frame received for 3\nI0429 00:36:06.527857 
2396 log.go:172] (0xc000900000) (3) Data frame handling\nI0429 00:36:06.527890 2396 log.go:172] (0xc000900000) (3) Data frame sent\nI0429 00:36:06.527939 2396 log.go:172] (0xc00003a420) Data frame received for 3\nI0429 00:36:06.527951 2396 log.go:172] (0xc000900000) (3) Data frame handling\nI0429 00:36:06.527984 2396 log.go:172] (0xc00003a420) Data frame received for 5\nI0429 00:36:06.528012 2396 log.go:172] (0xc0009000a0) (5) Data frame handling\nI0429 00:36:06.529410 2396 log.go:172] (0xc00003a420) Data frame received for 1\nI0429 00:36:06.529425 2396 log.go:172] (0xc0005a8140) (1) Data frame handling\nI0429 00:36:06.529434 2396 log.go:172] (0xc0005a8140) (1) Data frame sent\nI0429 00:36:06.529612 2396 log.go:172] (0xc00003a420) (0xc0005a8140) Stream removed, broadcasting: 1\nI0429 00:36:06.529665 2396 log.go:172] (0xc00003a420) Go away received\nI0429 00:36:06.530161 2396 log.go:172] (0xc00003a420) (0xc0005a8140) Stream removed, broadcasting: 1\nI0429 00:36:06.530184 2396 log.go:172] (0xc00003a420) (0xc000900000) Stream removed, broadcasting: 3\nI0429 00:36:06.530197 2396 log.go:172] (0xc00003a420) (0xc0009000a0) Stream removed, broadcasting: 5\n" Apr 29 00:36:06.535: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:36:06.535: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:36:06.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 29 00:36:06.776: INFO: stderr: "I0429 00:36:06.676925 2417 log.go:172] (0xc00003bb80) (0xc000686320) Create stream\nI0429 00:36:06.676976 2417 log.go:172] (0xc00003bb80) (0xc000686320) Stream added, broadcasting: 1\nI0429 00:36:06.680066 2417 log.go:172] (0xc00003bb80) Reply frame received for 1\nI0429 00:36:06.680102 
2417 log.go:172] (0xc00003bb80) (0xc0003cb180) Create stream\nI0429 00:36:06.680111 2417 log.go:172] (0xc00003bb80) (0xc0003cb180) Stream added, broadcasting: 3\nI0429 00:36:06.681065 2417 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0429 00:36:06.681095 2417 log.go:172] (0xc00003bb80) (0xc000376000) Create stream\nI0429 00:36:06.681105 2417 log.go:172] (0xc00003bb80) (0xc000376000) Stream added, broadcasting: 5\nI0429 00:36:06.682014 2417 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0429 00:36:06.745386 2417 log.go:172] (0xc00003bb80) Data frame received for 5\nI0429 00:36:06.745427 2417 log.go:172] (0xc000376000) (5) Data frame handling\nI0429 00:36:06.745446 2417 log.go:172] (0xc000376000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 00:36:06.768040 2417 log.go:172] (0xc00003bb80) Data frame received for 3\nI0429 00:36:06.768076 2417 log.go:172] (0xc0003cb180) (3) Data frame handling\nI0429 00:36:06.768098 2417 log.go:172] (0xc0003cb180) (3) Data frame sent\nI0429 00:36:06.768292 2417 log.go:172] (0xc00003bb80) Data frame received for 3\nI0429 00:36:06.768309 2417 log.go:172] (0xc0003cb180) (3) Data frame handling\nI0429 00:36:06.768321 2417 log.go:172] (0xc00003bb80) Data frame received for 5\nI0429 00:36:06.768329 2417 log.go:172] (0xc000376000) (5) Data frame handling\nI0429 00:36:06.770171 2417 log.go:172] (0xc00003bb80) Data frame received for 1\nI0429 00:36:06.770217 2417 log.go:172] (0xc000686320) (1) Data frame handling\nI0429 00:36:06.770240 2417 log.go:172] (0xc000686320) (1) Data frame sent\nI0429 00:36:06.770391 2417 log.go:172] (0xc00003bb80) (0xc000686320) Stream removed, broadcasting: 1\nI0429 00:36:06.770538 2417 log.go:172] (0xc00003bb80) Go away received\nI0429 00:36:06.770671 2417 log.go:172] (0xc00003bb80) (0xc000686320) Stream removed, broadcasting: 1\nI0429 00:36:06.770690 2417 log.go:172] (0xc00003bb80) (0xc0003cb180) Stream removed, broadcasting: 3\nI0429 00:36:06.770698 2417 
log.go:172] (0xc00003bb80) (0xc000376000) Stream removed, broadcasting: 5\n" Apr 29 00:36:06.776: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 29 00:36:06.776: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 29 00:36:06.776: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:36:06.794: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 29 00:36:16.803: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:36:16.803: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:36:16.803: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 29 00:36:16.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999955s Apr 29 00:36:17.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994011872s Apr 29 00:36:18.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988962711s Apr 29 00:36:19.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983706619s Apr 29 00:36:20.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978290867s Apr 29 00:36:21.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973015232s Apr 29 00:36:22.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967697181s Apr 29 00:36:23.852: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962592562s Apr 29 00:36:24.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957200982s Apr 29 00:36:25.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.019064ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6015 Apr 29 00:36:26.869: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:36:27.085: INFO: stderr: "I0429 00:36:27.000355 2439 log.go:172] (0xc0009cc000) (0xc000ad6000) Create stream\nI0429 00:36:27.000446 2439 log.go:172] (0xc0009cc000) (0xc000ad6000) Stream added, broadcasting: 1\nI0429 00:36:27.004406 2439 log.go:172] (0xc0009cc000) Reply frame received for 1\nI0429 00:36:27.004472 2439 log.go:172] (0xc0009cc000) (0xc000984000) Create stream\nI0429 00:36:27.004501 2439 log.go:172] (0xc0009cc000) (0xc000984000) Stream added, broadcasting: 3\nI0429 00:36:27.005848 2439 log.go:172] (0xc0009cc000) Reply frame received for 3\nI0429 00:36:27.005889 2439 log.go:172] (0xc0009cc000) (0xc00060b2c0) Create stream\nI0429 00:36:27.005903 2439 log.go:172] (0xc0009cc000) (0xc00060b2c0) Stream added, broadcasting: 5\nI0429 00:36:27.007107 2439 log.go:172] (0xc0009cc000) Reply frame received for 5\nI0429 00:36:27.077420 2439 log.go:172] (0xc0009cc000) Data frame received for 3\nI0429 00:36:27.077448 2439 log.go:172] (0xc000984000) (3) Data frame handling\nI0429 00:36:27.077456 2439 log.go:172] (0xc000984000) (3) Data frame sent\nI0429 00:36:27.077474 2439 log.go:172] (0xc0009cc000) Data frame received for 5\nI0429 00:36:27.077480 2439 log.go:172] (0xc00060b2c0) (5) Data frame handling\nI0429 00:36:27.077495 2439 log.go:172] (0xc00060b2c0) (5) Data frame sent\nI0429 00:36:27.077505 2439 log.go:172] (0xc0009cc000) Data frame received for 5\nI0429 00:36:27.077512 2439 log.go:172] (0xc00060b2c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:36:27.077603 2439 log.go:172] (0xc0009cc000) Data frame received for 3\nI0429 00:36:27.077625 2439 log.go:172] (0xc000984000) (3) Data frame handling\nI0429 00:36:27.079628 2439 log.go:172] (0xc0009cc000) Data frame received for 1\nI0429 00:36:27.079651 2439 
log.go:172] (0xc000ad6000) (1) Data frame handling\nI0429 00:36:27.079682 2439 log.go:172] (0xc000ad6000) (1) Data frame sent\nI0429 00:36:27.079699 2439 log.go:172] (0xc0009cc000) (0xc000ad6000) Stream removed, broadcasting: 1\nI0429 00:36:27.079716 2439 log.go:172] (0xc0009cc000) Go away received\nI0429 00:36:27.080097 2439 log.go:172] (0xc0009cc000) (0xc000ad6000) Stream removed, broadcasting: 1\nI0429 00:36:27.080120 2439 log.go:172] (0xc0009cc000) (0xc000984000) Stream removed, broadcasting: 3\nI0429 00:36:27.080132 2439 log.go:172] (0xc0009cc000) (0xc00060b2c0) Stream removed, broadcasting: 5\n" Apr 29 00:36:27.086: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:36:27.086: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:36:27.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 00:36:27.325: INFO: stderr: "I0429 00:36:27.244752 2462 log.go:172] (0xc0000e9b80) (0xc00094a000) Create stream\nI0429 00:36:27.244820 2462 log.go:172] (0xc0000e9b80) (0xc00094a000) Stream added, broadcasting: 1\nI0429 00:36:27.247814 2462 log.go:172] (0xc0000e9b80) Reply frame received for 1\nI0429 00:36:27.247843 2462 log.go:172] (0xc0000e9b80) (0xc00094a0a0) Create stream\nI0429 00:36:27.247853 2462 log.go:172] (0xc0000e9b80) (0xc00094a0a0) Stream added, broadcasting: 3\nI0429 00:36:27.249431 2462 log.go:172] (0xc0000e9b80) Reply frame received for 3\nI0429 00:36:27.249452 2462 log.go:172] (0xc0000e9b80) (0xc00094a140) Create stream\nI0429 00:36:27.249460 2462 log.go:172] (0xc0000e9b80) (0xc00094a140) Stream added, broadcasting: 5\nI0429 00:36:27.250337 2462 log.go:172] (0xc0000e9b80) Reply frame received for 5\nI0429 00:36:27.316729 2462 log.go:172] (0xc0000e9b80) 
Data frame received for 5\nI0429 00:36:27.316789 2462 log.go:172] (0xc00094a140) (5) Data frame handling\nI0429 00:36:27.316811 2462 log.go:172] (0xc00094a140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:36:27.316877 2462 log.go:172] (0xc0000e9b80) Data frame received for 3\nI0429 00:36:27.316894 2462 log.go:172] (0xc00094a0a0) (3) Data frame handling\nI0429 00:36:27.316906 2462 log.go:172] (0xc00094a0a0) (3) Data frame sent\nI0429 00:36:27.316918 2462 log.go:172] (0xc0000e9b80) Data frame received for 3\nI0429 00:36:27.316927 2462 log.go:172] (0xc00094a0a0) (3) Data frame handling\nI0429 00:36:27.316938 2462 log.go:172] (0xc0000e9b80) Data frame received for 5\nI0429 00:36:27.316946 2462 log.go:172] (0xc00094a140) (5) Data frame handling\nI0429 00:36:27.318572 2462 log.go:172] (0xc0000e9b80) Data frame received for 1\nI0429 00:36:27.318594 2462 log.go:172] (0xc00094a000) (1) Data frame handling\nI0429 00:36:27.318619 2462 log.go:172] (0xc00094a000) (1) Data frame sent\nI0429 00:36:27.318641 2462 log.go:172] (0xc0000e9b80) (0xc00094a000) Stream removed, broadcasting: 1\nI0429 00:36:27.318762 2462 log.go:172] (0xc0000e9b80) Go away received\nI0429 00:36:27.320127 2462 log.go:172] (0xc0000e9b80) (0xc00094a000) Stream removed, broadcasting: 1\nI0429 00:36:27.320158 2462 log.go:172] (0xc0000e9b80) (0xc00094a0a0) Stream removed, broadcasting: 3\nI0429 00:36:27.320176 2462 log.go:172] (0xc0000e9b80) (0xc00094a140) Stream removed, broadcasting: 5\n" Apr 29 00:36:27.325: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:36:27.325: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:36:27.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6015 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Apr 29 00:36:27.527: INFO: stderr: "I0429 00:36:27.451278 2481 log.go:172] (0xc0004d0000) (0xc000833220) Create stream\nI0429 00:36:27.451325 2481 log.go:172] (0xc0004d0000) (0xc000833220) Stream added, broadcasting: 1\nI0429 00:36:27.454145 2481 log.go:172] (0xc0004d0000) Reply frame received for 1\nI0429 00:36:27.454185 2481 log.go:172] (0xc0004d0000) (0xc0009d6000) Create stream\nI0429 00:36:27.454203 2481 log.go:172] (0xc0004d0000) (0xc0009d6000) Stream added, broadcasting: 3\nI0429 00:36:27.455232 2481 log.go:172] (0xc0004d0000) Reply frame received for 3\nI0429 00:36:27.455259 2481 log.go:172] (0xc0004d0000) (0xc0009d60a0) Create stream\nI0429 00:36:27.455267 2481 log.go:172] (0xc0004d0000) (0xc0009d60a0) Stream added, broadcasting: 5\nI0429 00:36:27.456075 2481 log.go:172] (0xc0004d0000) Reply frame received for 5\nI0429 00:36:27.510072 2481 log.go:172] (0xc0004d0000) Data frame received for 5\nI0429 00:36:27.510101 2481 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0429 00:36:27.510121 2481 log.go:172] (0xc0009d60a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 00:36:27.522537 2481 log.go:172] (0xc0004d0000) Data frame received for 3\nI0429 00:36:27.522560 2481 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0429 00:36:27.522581 2481 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0429 00:36:27.522594 2481 log.go:172] (0xc0004d0000) Data frame received for 3\nI0429 00:36:27.522603 2481 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0429 00:36:27.522721 2481 log.go:172] (0xc0004d0000) Data frame received for 5\nI0429 00:36:27.522736 2481 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0429 00:36:27.523679 2481 log.go:172] (0xc0004d0000) Data frame received for 1\nI0429 00:36:27.523695 2481 log.go:172] (0xc000833220) (1) Data frame handling\nI0429 00:36:27.523703 2481 log.go:172] (0xc000833220) (1) Data frame sent\nI0429 00:36:27.523714 2481 log.go:172] (0xc0004d0000) (0xc000833220) Stream 
removed, broadcasting: 1\nI0429 00:36:27.523763 2481 log.go:172] (0xc0004d0000) Go away received\nI0429 00:36:27.523996 2481 log.go:172] (0xc0004d0000) (0xc000833220) Stream removed, broadcasting: 1\nI0429 00:36:27.524014 2481 log.go:172] (0xc0004d0000) (0xc0009d6000) Stream removed, broadcasting: 3\nI0429 00:36:27.524025 2481 log.go:172] (0xc0004d0000) (0xc0009d60a0) Stream removed, broadcasting: 5\n" Apr 29 00:36:27.528: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 00:36:27.528: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 00:36:27.528: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 29 00:36:47.542: INFO: Deleting all statefulset in ns statefulset-6015 Apr 29 00:36:47.545: INFO: Scaling statefulset ss to 0 Apr 29 00:36:47.554: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 00:36:47.556: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:36:47.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6015" for this suite. 
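The OrderedReady behavior this test exercises — scale up in ascending ordinal order, halt while any stateful pod is unready (here the test breaks the httpd readiness probe by moving index.html out of the docroot), scale down in descending order — can be sketched as follows. This is an illustrative model, not the actual StatefulSet controller code; the pod names mirror the `ss-0`/`ss-1`/`ss-2` pods in the log.

```python
def scale_plan(current, desired):
    """Ordinals touched, in order, for an OrderedReady StatefulSet scale.

    Scale-up creates pods in ascending ordinal order (ss-0, ss-1, ...);
    scale-down deletes them in reverse, matching the "scaled up in order"
    and "scaled down in reverse order" verifications in the log.
    """
    if desired >= current:
        return [f"ss-{i}" for i in range(current, desired)]
    return [f"ss-{i}" for i in range(current - 1, desired - 1, -1)]


def may_proceed(pods_ready):
    """OrderedReady management halts further scaling while any existing
    stateful pod is unready -- which is why the log repeatedly verifies
    that ss "doesn't scale past" the current count for a full window."""
    return all(pods_ready)
```

With all three pods failing their readiness probe, `may_proceed([False, False, False])` is false, matching the ten-second "doesn't scale past 3" hold seen above before the scale to 0 proceeds.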
• [SLOW TEST:82.182 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":211,"skipped":3517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:36:47.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:36:47.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1198" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":212,"skipped":3554,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:36:47.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:37:00.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2433" for this suite. 
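The quota checks behind the STEP sequence above ("Creating a Pod that fits quota", "Not allowing a pod to be created that exceeds remaining quota") reduce to per-resource accounting: a pod is admitted only if, for every resource the quota tracks, current usage plus the pod's request stays within the hard limit. A minimal sketch, with the caveat that real quotas use `resource.Quantity` strings rather than the plain millicore/byte integers assumed here:

```python
def pod_fits_quota(hard, used, request):
    """True if the pod's requests fit within the remaining quota for
    every tracked resource; resources the quota does not track are
    ignored, and usage is only charged once the pod is admitted."""
    return all(
        used.get(resource, 0) + request.get(resource, 0) <= limit
        for resource, limit in hard.items()
    )
```

This also explains the "attempts to update pod resource requirements did not change quota usage" step: a rejected update never reaches the accounting, so `used` is unchanged.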
• [SLOW TEST:13.159 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":213,"skipped":3573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:37:00.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 00:37:01.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 00:37:03.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717421, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717421, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717421, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717421, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 00:37:06.460: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:37:06.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4913-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:37:07.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5501" for this suite. STEP: Destroying namespace "webhook-5501-markers" for this suite. 
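A mutating webhook like the one registered above for `e2e-test-webhook-4913-crds.webhook.example.com` replies to an AdmissionReview with a base64-encoded JSONPatch. A minimal sketch of building such a response — the envelope follows the real `admission.k8s.io/v1` shape, but the patch contents here are only an example, not what this test's webhook applies:

```python
import base64
import json


def mutate_response(uid, patch_ops):
    """Build an admission.k8s.io/v1 AdmissionReview response that allows
    the request and attaches a base64-encoded JSONPatch mutation."""
    patch = base64.b64encode(json.dumps(patch_ops).encode()).decode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must echo the uid from the incoming request
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": patch,
        },
    }
```

The apiserver decodes the patch and applies it to the incoming custom resource, which is what the "Creating a custom resource that should be mutated by the webhook" step then verifies.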
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.721 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":214,"skipped":3675,"failed":0}
SS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:07.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 29 00:37:07.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:11.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6392" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3677,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:11.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:37:11.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973" in namespace "projected-8801" to be "Succeeded or Failed"
Apr 29 00:37:11.888: INFO: Pod "downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973": Phase="Pending", Reason="", readiness=false. Elapsed: 11.863315ms
Apr 29 00:37:13.893: INFO: Pod "downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016519811s
Apr 29 00:37:15.897: INFO: Pod "downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020819445s
STEP: Saw pod success
Apr 29 00:37:15.897: INFO: Pod "downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973" satisfied condition "Succeeded or Failed"
Apr 29 00:37:15.900: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973 container client-container:
STEP: delete the pod
Apr 29 00:37:15.959: INFO: Waiting for pod downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973 to disappear
Apr 29 00:37:15.978: INFO: Pod downwardapi-volume-b9c39f56-73b7-4c73-acea-ca540e577973 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:15.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8801" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3745,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:15.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:37:16.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113" in namespace "projected-4368" to be "Succeeded or Failed"
Apr 29 00:37:16.089: INFO: Pod "downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113": Phase="Pending", Reason="", readiness=false. Elapsed: 20.040045ms
Apr 29 00:37:18.094: INFO: Pod "downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024445614s
Apr 29 00:37:20.098: INFO: Pod "downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028508179s
STEP: Saw pod success
Apr 29 00:37:20.098: INFO: Pod "downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113" satisfied condition "Succeeded or Failed"
Apr 29 00:37:20.100: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113 container client-container:
STEP: delete the pod
Apr 29 00:37:20.116: INFO: Waiting for pod downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113 to disappear
Apr 29 00:37:20.121: INFO: Pod downwardapi-volume-1b8d6702-680a-43b8-828f-826204920113 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:20.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4368" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3762,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:20.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8790.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8790.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 00:37:26.254: INFO: DNS probes using dns-8790/dns-test-d55f207d-519b-4a76-9464-6075dd96861f succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:26.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8790" for this suite.
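The probe pods logged above derive their own A-record name from the pod IP with a small `awk` transform (the doubled `$$` in the logged commands is the e2e framework's escaping of `$`). The transform can be re-run locally; `10.244.1.5` is a sample address standing in for what `hostname -i` returns inside the pod, and `dns-8790` is the test namespace from the log:

```shell
# Reproduce the pod A-record name derivation from the probe command.
# 10.244.1.5 is a sample IP; inside the probe pod, `hostname -i` supplies it.
pod_ip="10.244.1.5"
echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8790.pod.cluster.local"}'
# → 10-244-1-5.dns-8790.pod.cluster.local
```

Dots in the IP become dashes, yielding the `<a-b-c-d>.<namespace>.pod.<cluster-domain>` name that the probe then resolves with `dig` over both UDP and TCP.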
• [SLOW TEST:6.199 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":218,"skipped":3773,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:26.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:37:27.088: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 00:37:29.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 00:37:31.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717447, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:37:34.566: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:44.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6398" for this suite.
STEP: Destroying namespace "webhook-6398-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.450 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":219,"skipped":3773,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:44.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 29 00:37:44.843: INFO: namespace kubectl-7569
Apr 29 00:37:44.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7569'
Apr 29 00:37:45.166: INFO: stderr: ""
Apr 29 00:37:45.167: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 29 00:37:46.171: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:37:46.171: INFO: Found 0 / 1
Apr 29 00:37:47.192: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:37:47.192: INFO: Found 0 / 1
Apr 29 00:37:48.171: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:37:48.171: INFO: Found 1 / 1
Apr 29 00:37:48.171: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 29 00:37:48.174: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:37:48.174: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 29 00:37:48.174: INFO: wait on agnhost-master startup in kubectl-7569
Apr 29 00:37:48.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-q2jbt agnhost-master --namespace=kubectl-7569'
Apr 29 00:37:48.293: INFO: stderr: ""
Apr 29 00:37:48.293: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 29 00:37:48.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7569'
Apr 29 00:37:48.473: INFO: stderr: ""
Apr 29 00:37:48.473: INFO: stdout: "service/rm2 exposed\n"
Apr 29 00:37:48.482: INFO: Service rm2 in namespace kubectl-7569 found.
STEP: exposing service
Apr 29 00:37:50.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7569'
Apr 29 00:37:50.627: INFO: stderr: ""
Apr 29 00:37:50.627: INFO: stdout: "service/rm3 exposed\n"
Apr 29 00:37:50.646: INFO: Service rm3 in namespace kubectl-7569 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:52.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7569" for this suite.
• [SLOW TEST:7.885 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":220,"skipped":3774,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:52.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 29 00:37:52.708: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Apr 29 00:37:53.462: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 29 00:37:55.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717473, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717473, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717473, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717473, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 00:37:58.426: INFO: Waited 535.50571ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:37:59.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5886" for this suite.
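The `ext:63723717473` fields inside the dumped `v1.Time` values are Go `time.Time` internals: with `wall:0x0`, `ext` holds whole seconds since January 1 of year 1. Subtracting Go's Unix-epoch offset (62135596800 seconds from 0001-01-01 to 1970-01-01) recovers the wall-clock time; a quick sanity check, assuming GNU `date`:

```shell
# Decode the ext field of a dumped v1.Time (seconds since 0001-01-01 UTC).
# 62135596800 is the second count from 0001-01-01 to the Unix epoch.
ext=63723717473
unix=$((ext - 62135596800))
date -u -d "@$unix"
```

This decodes to 00:37:53 UTC on Apr 29 2020, matching the timestamp on the surrounding log entries, so the condition's `LastTransitionTime` is the moment the deployment was created.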
• [SLOW TEST:6.620 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":221,"skipped":3800,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:37:59.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 29 00:37:59.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5531'
Apr 29 00:37:59.852: INFO: stderr: ""
Apr 29 00:37:59.852: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 29 00:38:00.856: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:38:00.856: INFO: Found 0 / 1
Apr 29 00:38:01.856: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:38:01.856: INFO: Found 0 / 1
Apr 29 00:38:02.856: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:38:02.856: INFO: Found 0 / 1
Apr 29 00:38:03.874: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:38:03.874: INFO: Found 1 / 1
Apr 29 00:38:03.874: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 29 00:38:03.877: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:38:03.877: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 29 00:38:03.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-xclgg --namespace=kubectl-5531 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 29 00:38:03.976: INFO: stderr: ""
Apr 29 00:38:03.976: INFO: stdout: "pod/agnhost-master-xclgg patched\n"
STEP: checking annotations
Apr 29 00:38:03.993: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 00:38:03.993: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:38:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5531" for this suite.
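Each completed spec also emits one JSON progress record (the `{"msg":"PASSED …","total":275,"completed":…,"skipped":…,"failed":…}` lines). These records make a run easy to tally without a JSON tool; a sketch, assuming the captured output is saved as `e2e.log` (a hypothetical filename):

```shell
# Tally the per-spec JSON progress records embedded in an e2e log:
# count PASSED/FAILED specs and report the highest "completed" counter seen.
grep -o '{"msg":"[^"]*"[^}]*}' e2e.log | awk '
  /"msg":"PASSED/ { passed++ }
  /"msg":"FAILED/ { failed++ }
  match($0, /"completed":[0-9]+/) {
    n = substr($0, RSTART + 12, RLENGTH - 12) + 0   # digits after "completed":
    if (n > max) max = n
  }
  END { printf "passed=%d failed=%d completed=%d\n", passed, failed, max }'
```

The `grep -o` prefilter isolates each record even when several share a physical line, so the tally is robust against the log being reflowed.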
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":222,"skipped":3803,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:38:04.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-67fa7631-453d-44c0-a3ee-7fd9f83b7f0c
STEP: Creating a pod to test consume configMaps
Apr 29 00:38:04.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a" in namespace "projected-5091" to be "Succeeded or Failed"
Apr 29 00:38:04.122: INFO: Pod "pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.517583ms
Apr 29 00:38:06.125: INFO: Pod "pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049053586s
Apr 29 00:38:08.129: INFO: Pod "pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053086196s
STEP: Saw pod success
Apr 29 00:38:08.129: INFO: Pod "pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a" satisfied condition "Succeeded or Failed"
Apr 29 00:38:08.132: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a container projected-configmap-volume-test:
STEP: delete the pod
Apr 29 00:38:08.164: INFO: Waiting for pod pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a to disappear
Apr 29 00:38:08.176: INFO: Pod pod-projected-configmaps-5370bf30-83b5-416d-88dc-8e2ac778a70a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:38:08.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5091" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3814,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:38:08.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:38:08.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9842" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":224,"skipped":3822,"failed":0}
S
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:38:08.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 29 00:38:13.487: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:38:13.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2591" for this suite.
• [SLOW TEST:5.239 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":225,"skipped":3823,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:38:13.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 29 00:38:13.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712" in namespace "projected-9482" to be "Succeeded or Failed"
Apr 29 00:38:13.680: INFO: Pod "downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83032ms
Apr 29 00:38:15.791: INFO: Pod "downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114745846s
Apr 29 00:38:17.796: INFO: Pod "downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712": Phase="Running", Reason="", readiness=true. Elapsed: 4.119487884s
Apr 29 00:38:19.800: INFO: Pod "downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123946925s
STEP: Saw pod success
Apr 29 00:38:19.800: INFO: Pod "downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712" satisfied condition "Succeeded or Failed"
Apr 29 00:38:19.804: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712 container client-container:
STEP: delete the pod
Apr 29 00:38:19.881: INFO: Waiting for pod downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712 to disappear
Apr 29 00:38:19.889: INFO: Pod downwardapi-volume-3b57be69-9599-464b-94a7-13ca61f31712 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:38:19.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9482" for this suite.
• [SLOW TEST:6.328 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3883,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:38:19.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 00:38:20.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 00:38:22.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717500, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717500, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717500, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717500, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 00:38:25.801: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:38:26.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1656" for this suite.
STEP: Destroying namespace "webhook-1656-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.321 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":227,"skipped":3893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:38:26.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-5a68359e-e01a-4ad4-bf40-728c51775c8a STEP: Creating configMap with name cm-test-opt-upd-bcb3f86a-7c6a-4d0b-b154-d158c6593bb1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5a68359e-e01a-4ad4-bf40-728c51775c8a STEP: Updating configmap cm-test-opt-upd-bcb3f86a-7c6a-4d0b-b154-d158c6593bb1 STEP: Creating configMap with name 
cm-test-opt-create-443fc2a0-6fe1-4678-abf6-de0986cd2aa3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:39:49.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7690" for this suite. • [SLOW TEST:83.013 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3941,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:39:49.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:39:49.311: INFO: Creating deployment "test-recreate-deployment" Apr 29 00:39:49.316: INFO: Waiting deployment 
"test-recreate-deployment" to be updated to revision 1 Apr 29 00:39:49.346: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 29 00:39:51.355: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 29 00:39:51.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717589, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717589, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717589, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717589, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 00:39:53.418: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 29 00:39:53.424: INFO: Updating deployment test-recreate-deployment Apr 29 00:39:53.424: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 29 00:39:53.837: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3859 /apis/apps/v1/namespaces/deployment-3859/deployments/test-recreate-deployment e254f0a8-582a-4947-bb3e-df33e81c4860 11858315 2 2020-04-29 00:39:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003041138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-29 00:39:53 +0000 UTC,LastTransitionTime:2020-04-29 00:39:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-29 00:39:53 +0000 UTC,LastTransitionTime:2020-04-29 00:39:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 29 00:39:54.008: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3859 
/apis/apps/v1/namespaces/deployment-3859/replicasets/test-recreate-deployment-5f94c574ff 4cabe644-919b-4f9f-ad82-79e98dc1907f 11858313 1 2020-04-29 00:39:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment e254f0a8-582a-4947-bb3e-df33e81c4860 0xc003041537 0xc003041538}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003041598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:39:54.008: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 29 00:39:54.008: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-3859 /apis/apps/v1/namespaces/deployment-3859/replicasets/test-recreate-deployment-846c7dd955 184a6677-8f53-49ae-b8d5-270218fcff0d 11858304 2 2020-04-29 00:39:49 +0000 UTC map[name:sample-pod-3 
pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment e254f0a8-582a-4947-bb3e-df33e81c4860 0xc003041607 0xc003041608}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003041678 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 29 00:39:54.011: INFO: Pod "test-recreate-deployment-5f94c574ff-8qbwt" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-8qbwt test-recreate-deployment-5f94c574ff- deployment-3859 /api/v1/namespaces/deployment-3859/pods/test-recreate-deployment-5f94c574ff-8qbwt 00ac62e1-ab73-4f02-9d3a-44c02b0be000 11858316 0 2020-04-29 00:39:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4cabe644-919b-4f9f-ad82-79e98dc1907f 0xc003041b47 0xc003041b48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5jgt2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5jgt2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5jgt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:39:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:39:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:39:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 00:39:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-29 00:39:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:39:54.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3859" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":229,"skipped":3956,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:39:54.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:39:54.144: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bb82d286-90e5-43f5-9de9-ec1f783638e0", Controller:(*bool)(0xc00474745a), 
BlockOwnerDeletion:(*bool)(0xc00474745b)}} Apr 29 00:39:54.179: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"617c8fe4-71a7-42af-bf9f-c2afa10d92f0", Controller:(*bool)(0xc0050fb55a), BlockOwnerDeletion:(*bool)(0xc0050fb55b)}} Apr 29 00:39:54.215: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7f550f90-30fa-446d-86b9-ebcea43004ef", Controller:(*bool)(0xc00474763a), BlockOwnerDeletion:(*bool)(0xc00474763b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:39:59.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7630" for this suite. • [SLOW TEST:5.602 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":230,"skipped":3994,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:39:59.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8912.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8912.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8912.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 00:40:09.800: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.803: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.807: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.810: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.819: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.822: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from 
pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.825: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.828: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:09.836: INFO: Lookups using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local] Apr 29 00:40:14.840: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.844: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.848: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local from 
pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.851: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.863: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.867: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.869: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.871: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:14.875: INFO: Lookups using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local] Apr 29 00:40:19.841: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.845: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.848: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.851: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.860: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.863: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.865: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod 
dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.868: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:19.875: INFO: Lookups using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local] Apr 29 00:40:24.839: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.842: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.845: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.847: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod 
dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.855: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.858: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.860: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.863: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:24.868: INFO: Lookups using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local] Apr 29 00:40:29.841: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.845: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.848: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.850: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.859: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.861: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.864: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.866: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:29.871: INFO: Lookups using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8912.svc.cluster.local] Apr 29 00:40:34.848: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:34.884: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local from pod dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74: the server could not find the requested resource (get pods dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74) Apr 29 00:40:34.889: INFO: Lookups using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 failed for: [wheezy_udp@dns-test-service-2.dns-8912.svc.cluster.local jessie_udp@dns-test-service-2.dns-8912.svc.cluster.local] Apr 29 00:40:39.870: INFO: DNS probes using dns-8912/dns-test-cd10a927-a6c0-414f-9fb1-875ddb4b6c74 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:40:39.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-8912" for this suite. • [SLOW TEST:40.703 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":231,"skipped":4004,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:40:40.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:40:40.416: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 29 00:40:43.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6127 create -f -' Apr 29 00:40:43.890: INFO: stderr: "" Apr 29 00:40:43.890: INFO: stdout: "e2e-test-crd-publish-openapi-4242-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 29 00:40:43.890: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6127 delete e2e-test-crd-publish-openapi-4242-crds test-cr' Apr 29 00:40:43.979: INFO: stderr: "" Apr 29 00:40:43.979: INFO: stdout: "e2e-test-crd-publish-openapi-4242-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 29 00:40:43.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6127 apply -f -' Apr 29 00:40:44.239: INFO: stderr: "" Apr 29 00:40:44.239: INFO: stdout: "e2e-test-crd-publish-openapi-4242-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 29 00:40:44.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6127 delete e2e-test-crd-publish-openapi-4242-crds test-cr' Apr 29 00:40:44.340: INFO: stderr: "" Apr 29 00:40:44.340: INFO: stdout: "e2e-test-crd-publish-openapi-4242-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 29 00:40:44.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4242-crds' Apr 29 00:40:44.656: INFO: stderr: "" Apr 29 00:40:44.656: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4242-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:40:47.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6127" for this suite. 
• [SLOW TEST:7.272 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":232,"skipped":4012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:40:47.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:40:51.774: INFO: Waiting up to 5m0s for pod "client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d" in namespace "pods-5846" to be "Succeeded or Failed" Apr 29 00:40:51.780: INFO: Pod "client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.921197ms Apr 29 00:40:53.784: INFO: Pod "client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010128352s Apr 29 00:40:55.789: INFO: Pod "client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014681909s STEP: Saw pod success Apr 29 00:40:55.789: INFO: Pod "client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d" satisfied condition "Succeeded or Failed" Apr 29 00:40:55.792: INFO: Trying to get logs from node latest-worker pod client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d container env3cont: STEP: delete the pod Apr 29 00:40:55.847: INFO: Waiting for pod client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d to disappear Apr 29 00:40:55.852: INFO: Pod client-envvars-17b7ce1b-4cf4-4f1a-9bc0-323f25924d5d no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:40:55.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5846" for this suite. • [SLOW TEST:8.258 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4047,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:40:55.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 29 00:40:55.938: INFO: Waiting up to 5m0s for pod "pod-da81a44f-bf76-4f3d-80ef-39c6a6596219" in namespace "emptydir-5404" to be "Succeeded or Failed" Apr 29 00:40:55.942: INFO: Pod "pod-da81a44f-bf76-4f3d-80ef-39c6a6596219": Phase="Pending", Reason="", readiness=false. Elapsed: 3.440243ms Apr 29 00:40:57.946: INFO: Pod "pod-da81a44f-bf76-4f3d-80ef-39c6a6596219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00748106s Apr 29 00:40:59.950: INFO: Pod "pod-da81a44f-bf76-4f3d-80ef-39c6a6596219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011634347s STEP: Saw pod success Apr 29 00:40:59.950: INFO: Pod "pod-da81a44f-bf76-4f3d-80ef-39c6a6596219" satisfied condition "Succeeded or Failed" Apr 29 00:40:59.953: INFO: Trying to get logs from node latest-worker2 pod pod-da81a44f-bf76-4f3d-80ef-39c6a6596219 container test-container: STEP: delete the pod Apr 29 00:40:59.985: INFO: Waiting for pod pod-da81a44f-bf76-4f3d-80ef-39c6a6596219 to disappear Apr 29 00:40:59.990: INFO: Pod pod-da81a44f-bf76-4f3d-80ef-39c6a6596219 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:40:59.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5404" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4050,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:40:59.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 29 00:41:04.582: INFO: Successfully updated pod "pod-update-activedeadlineseconds-42d871a1-7914-4911-be8f-5b84e1e68577" Apr 29 00:41:04.582: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-42d871a1-7914-4911-be8f-5b84e1e68577" in namespace "pods-185" to be "terminated due to deadline exceeded" Apr 29 00:41:04.599: INFO: Pod "pod-update-activedeadlineseconds-42d871a1-7914-4911-be8f-5b84e1e68577": Phase="Running", Reason="", readiness=true. Elapsed: 17.475915ms Apr 29 00:41:06.630: INFO: Pod "pod-update-activedeadlineseconds-42d871a1-7914-4911-be8f-5b84e1e68577": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.048698667s Apr 29 00:41:06.630: INFO: Pod "pod-update-activedeadlineseconds-42d871a1-7914-4911-be8f-5b84e1e68577" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:41:06.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-185" for this suite. • [SLOW TEST:6.641 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4071,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:41:06.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 00:41:07.453: 
INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 00:41:10.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717667, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717667, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 00:41:13.220: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 29 00:41:17.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-8694 to-be-attached-pod -i -c=container1' Apr 29 00:41:17.410: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:41:17.415: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-8694" for this suite. STEP: Destroying namespace "webhook-8694-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.856 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":236,"skipped":4085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:41:17.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach 
Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 00:41:21.603: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:41:21.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3181" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4128,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:41:21.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 29 00:41:29.787: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:29.806: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:31.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:31.811: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:33.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:33.811: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:35.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:35.811: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:37.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:37.810: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:39.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:39.811: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:41.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:41.811: INFO: Pod pod-with-prestop-exec-hook still exists Apr 29 00:41:43.806: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 29 00:41:43.811: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:41:43.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7328" for this suite. 
• [SLOW TEST:22.174 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4146,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:41:43.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5711 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5711 STEP: creating replication controller externalsvc in namespace services-5711 I0429 
00:41:44.004060 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5711, replica count: 2 I0429 00:41:47.054514 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 00:41:50.054732 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 29 00:41:50.090: INFO: Creating new exec pod Apr 29 00:41:54.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5711 execpodc92f5 -- /bin/sh -x -c nslookup clusterip-service' Apr 29 00:41:54.364: INFO: stderr: "I0429 00:41:54.256583 2764 log.go:172] (0xc000940dc0) (0xc0009163c0) Create stream\nI0429 00:41:54.256632 2764 log.go:172] (0xc000940dc0) (0xc0009163c0) Stream added, broadcasting: 1\nI0429 00:41:54.260696 2764 log.go:172] (0xc000940dc0) Reply frame received for 1\nI0429 00:41:54.260746 2764 log.go:172] (0xc000940dc0) (0xc0003eca00) Create stream\nI0429 00:41:54.260770 2764 log.go:172] (0xc000940dc0) (0xc0003eca00) Stream added, broadcasting: 3\nI0429 00:41:54.261684 2764 log.go:172] (0xc000940dc0) Reply frame received for 3\nI0429 00:41:54.261713 2764 log.go:172] (0xc000940dc0) (0xc000916000) Create stream\nI0429 00:41:54.261721 2764 log.go:172] (0xc000940dc0) (0xc000916000) Stream added, broadcasting: 5\nI0429 00:41:54.262378 2764 log.go:172] (0xc000940dc0) Reply frame received for 5\nI0429 00:41:54.345408 2764 log.go:172] (0xc000940dc0) Data frame received for 5\nI0429 00:41:54.345446 2764 log.go:172] (0xc000916000) (5) Data frame handling\nI0429 00:41:54.345469 2764 log.go:172] (0xc000916000) (5) Data frame sent\n+ nslookup clusterip-service\nI0429 00:41:54.355332 2764 log.go:172] (0xc000940dc0) Data frame received for 3\nI0429 00:41:54.355369 
2764 log.go:172] (0xc0003eca00) (3) Data frame handling\nI0429 00:41:54.355393 2764 log.go:172] (0xc0003eca00) (3) Data frame sent\nI0429 00:41:54.356543 2764 log.go:172] (0xc000940dc0) Data frame received for 3\nI0429 00:41:54.356564 2764 log.go:172] (0xc0003eca00) (3) Data frame handling\nI0429 00:41:54.356589 2764 log.go:172] (0xc0003eca00) (3) Data frame sent\nI0429 00:41:54.357034 2764 log.go:172] (0xc000940dc0) Data frame received for 5\nI0429 00:41:54.357065 2764 log.go:172] (0xc000916000) (5) Data frame handling\nI0429 00:41:54.357332 2764 log.go:172] (0xc000940dc0) Data frame received for 3\nI0429 00:41:54.357351 2764 log.go:172] (0xc0003eca00) (3) Data frame handling\nI0429 00:41:54.359626 2764 log.go:172] (0xc000940dc0) Data frame received for 1\nI0429 00:41:54.359653 2764 log.go:172] (0xc0009163c0) (1) Data frame handling\nI0429 00:41:54.359667 2764 log.go:172] (0xc0009163c0) (1) Data frame sent\nI0429 00:41:54.359721 2764 log.go:172] (0xc000940dc0) (0xc0009163c0) Stream removed, broadcasting: 1\nI0429 00:41:54.359752 2764 log.go:172] (0xc000940dc0) Go away received\nI0429 00:41:54.360214 2764 log.go:172] (0xc000940dc0) (0xc0009163c0) Stream removed, broadcasting: 1\nI0429 00:41:54.360238 2764 log.go:172] (0xc000940dc0) (0xc0003eca00) Stream removed, broadcasting: 3\nI0429 00:41:54.360250 2764 log.go:172] (0xc000940dc0) (0xc000916000) Stream removed, broadcasting: 5\n" Apr 29 00:41:54.365: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5711.svc.cluster.local\tcanonical name = externalsvc.services-5711.svc.cluster.local.\nName:\texternalsvc.services-5711.svc.cluster.local\nAddress: 10.96.118.131\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5711, will wait for the garbage collector to delete the pods Apr 29 00:41:54.425: INFO: Deleting ReplicationController externalsvc took: 6.647269ms Apr 29 00:41:54.725: INFO: Terminating ReplicationController externalsvc pods took: 300.41826ms 
Apr 29 00:42:02.867: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:42:02.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5711" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:19.104 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":239,"skipped":4160,"failed":0} [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:42:02.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication 
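For reference, the DNS check that passed in the test above can be reproduced by hand. This is a sketch using the service and namespace names taken from the log; the exec pod name is hypothetical, and the `--server`/`--kubeconfig` flags shown in the log are omitted:

```shell
# Resolve the ClusterIP-turned-ExternalName service from inside a pod in the
# same namespace. "execpod" is a hypothetical helper pod; the service and
# namespace names come from the log above.
kubectl exec execpod --namespace=services-5711 -- \
  nslookup clusterip-service.services-5711.svc.cluster.local
# Per the stdout captured above, the answer should show a canonical name of
# externalsvc.services-5711.svc.cluster.local and its ClusterIP.
```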
controller Apr 29 00:42:03.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-75' Apr 29 00:42:03.350: INFO: stderr: "" Apr 29 00:42:03.350: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 29 00:42:03.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:03.491: INFO: stderr: "" Apr 29 00:42:03.491: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-jddpj " Apr 29 00:42:03.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:03.585: INFO: stderr: "" Apr 29 00:42:03.585: INFO: stdout: "" Apr 29 00:42:03.585: INFO: update-demo-nautilus-9778p is created but not running Apr 29 00:42:08.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:09.322: INFO: stderr: "" Apr 29 00:42:09.322: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-jddpj " Apr 29 00:42:09.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:09.588: INFO: stderr: "" Apr 29 00:42:09.588: INFO: stdout: "true" Apr 29 00:42:09.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:09.830: INFO: stderr: "" Apr 29 00:42:09.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 29 00:42:09.830: INFO: validating pod update-demo-nautilus-9778p Apr 29 00:42:09.839: INFO: got data: { "image": "nautilus.jpg" } Apr 29 00:42:09.839: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 00:42:09.839: INFO: update-demo-nautilus-9778p is verified up and running Apr 29 00:42:09.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jddpj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:09.991: INFO: stderr: "" Apr 29 00:42:09.991: INFO: stdout: "true" Apr 29 00:42:09.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jddpj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:10.082: INFO: stderr: "" Apr 29 00:42:10.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 29 00:42:10.082: INFO: validating pod update-demo-nautilus-jddpj Apr 29 00:42:10.100: INFO: got data: { "image": "nautilus.jpg" } Apr 29 00:42:10.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 00:42:10.100: INFO: update-demo-nautilus-jddpj is verified up and running STEP: scaling down the replication controller Apr 29 00:42:10.117: INFO: scanned /root for discovery docs: Apr 29 00:42:10.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-75' Apr 29 00:42:11.505: INFO: stderr: "" Apr 29 00:42:11.505: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 29 00:42:11.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:11.616: INFO: stderr: "" Apr 29 00:42:11.616: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-jddpj " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 29 00:42:16.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:16.728: INFO: stderr: "" Apr 29 00:42:16.729: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-jddpj " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 29 00:42:21.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:21.830: INFO: stderr: "" Apr 29 00:42:21.830: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-jddpj " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 29 00:42:26.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:26.930: INFO: stderr: "" Apr 29 00:42:26.930: INFO: stdout: "update-demo-nautilus-9778p " Apr 29 00:42:26.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:27.034: INFO: stderr: "" Apr 29 00:42:27.034: INFO: stdout: "true" Apr 29 00:42:27.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:27.132: INFO: stderr: "" Apr 29 00:42:27.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 29 00:42:27.132: INFO: validating pod update-demo-nautilus-9778p Apr 29 00:42:27.135: INFO: got data: { "image": "nautilus.jpg" } Apr 29 00:42:27.135: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 00:42:27.135: INFO: update-demo-nautilus-9778p is verified up and running STEP: scaling up the replication controller Apr 29 00:42:27.139: INFO: scanned /root for discovery docs: Apr 29 00:42:27.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-75' Apr 29 00:42:28.374: INFO: stderr: "" Apr 29 00:42:28.374: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 29 00:42:28.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:28.470: INFO: stderr: "" Apr 29 00:42:28.470: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-tp2hq " Apr 29 00:42:28.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:28.569: INFO: stderr: "" Apr 29 00:42:28.569: INFO: stdout: "true" Apr 29 00:42:28.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:28.665: INFO: stderr: "" Apr 29 00:42:28.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 29 00:42:28.665: INFO: validating pod update-demo-nautilus-9778p Apr 29 00:42:28.669: INFO: got data: { "image": "nautilus.jpg" } Apr 29 00:42:28.669: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 00:42:28.669: INFO: update-demo-nautilus-9778p is verified up and running Apr 29 00:42:28.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp2hq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:28.801: INFO: stderr: "" Apr 29 00:42:28.801: INFO: stdout: "" Apr 29 00:42:28.801: INFO: update-demo-nautilus-tp2hq is created but not running Apr 29 00:42:33.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-75' Apr 29 00:42:33.932: INFO: stderr: "" Apr 29 00:42:33.932: INFO: stdout: "update-demo-nautilus-9778p update-demo-nautilus-tp2hq " Apr 29 00:42:33.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:34.027: INFO: stderr: "" Apr 29 00:42:34.027: INFO: stdout: "true" Apr 29 00:42:34.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9778p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:34.119: INFO: stderr: "" Apr 29 00:42:34.119: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 29 00:42:34.119: INFO: validating pod update-demo-nautilus-9778p Apr 29 00:42:34.123: INFO: got data: { "image": "nautilus.jpg" } Apr 29 00:42:34.123: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 29 00:42:34.123: INFO: update-demo-nautilus-9778p is verified up and running Apr 29 00:42:34.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp2hq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:34.212: INFO: stderr: "" Apr 29 00:42:34.212: INFO: stdout: "true" Apr 29 00:42:34.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tp2hq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-75' Apr 29 00:42:34.309: INFO: stderr: "" Apr 29 00:42:34.309: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 29 00:42:34.309: INFO: validating pod update-demo-nautilus-tp2hq Apr 29 00:42:34.313: INFO: got data: { "image": "nautilus.jpg" } Apr 29 00:42:34.313: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 29 00:42:34.313: INFO: update-demo-nautilus-tp2hq is verified up and running STEP: using delete to clean up resources Apr 29 00:42:34.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-75' Apr 29 00:42:34.432: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 29 00:42:34.432: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 29 00:42:34.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-75' Apr 29 00:42:34.530: INFO: stderr: "No resources found in kubectl-75 namespace.\n" Apr 29 00:42:34.530: INFO: stdout: "" Apr 29 00:42:34.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-75 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 29 00:42:34.626: INFO: stderr: "" Apr 29 00:42:34.626: INFO: stdout: "update-demo-nautilus-9778p\nupdate-demo-nautilus-tp2hq\n" Apr 29 00:42:35.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-75' Apr 29 00:42:35.226: INFO: stderr: "No resources found in kubectl-75 namespace.\n" Apr 29 00:42:35.226: INFO: stdout: "" Apr 29 00:42:35.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-75 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 29 00:42:35.305: INFO: stderr: "" Apr 29 00:42:35.305: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:42:35.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-75" for this suite. 
• [SLOW TEST:32.384 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":240,"skipped":4160,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:42:35.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:42:39.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3617" for this suite. 
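The Update Demo test above scales a replication controller down to one replica and back up to two, polling pod names and container state with go-templates between each step. A condensed sketch of those commands (namespace and RC names from the log; `--server`/`--kubeconfig` flags omitted):

```shell
# Scale the replication controller, as the test does.
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-75

# Poll the matching pod names until the count equals the expected replicas.
kubectl get pods -l name=update-demo --namespace=kubectl-75 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# Check that a given pod's update-demo container is running (prints "true" if so).
kubectl get pods update-demo-nautilus-9778p --namespace=kubectl-75 \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
```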
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":241,"skipped":4161,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:42:39.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 29 00:42:46.686: INFO: 7 pods remaining Apr 29 00:42:46.686: INFO: 0 pods has nil DeletionTimestamp Apr 29 00:42:46.686: INFO: Apr 29 00:42:47.527: INFO: 0 pods remaining Apr 29 00:42:47.527: INFO: 0 pods has nil DeletionTimestamp Apr 29 00:42:47.527: INFO: STEP: Gathering metrics W0429 00:42:48.545995 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 29 00:42:48.546: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:42:48.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2532" for this suite. 
• [SLOW TEST:9.040 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":242,"skipped":4163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:42:48.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0429 00:43:29.919100 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
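The behavior exercised above — the RC remaining until all of its pods are deleted — corresponds to foreground cascading deletion, i.e. a delete request with `propagationPolicy: Foreground` in its DeleteOptions. A sketch of issuing that request directly against the API (the RC name and namespace are hypothetical; `kubectl proxy` is assumed to be running on port 8001):

```shell
# Foreground deletion: the owner (the RC) is kept, with a deletion timestamp
# set, until the garbage collector has removed all of its dependents (pods).
curl -X DELETE "http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```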
Apr 29 00:43:29.919: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:43:29.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2702" for this suite. 
• [SLOW TEST:41.199 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":243,"skipped":4200,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:43:29.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:43:29.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5461" for this suite. 
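The orphaning test above is the complementary case: a delete request with `propagationPolicy: Orphan`, which removes the RC but leaves its pods running (the log's 30-second wait verifies the garbage collector does not delete them). A sketch of the equivalent request (RC name and namespace hypothetical; `kubectl proxy` assumed on port 8001):

```shell
# Orphan deletion: the RC is deleted, its pods' ownerReferences are cleared,
# and the pods themselves are left running.
curl -X DELETE "http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
```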
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":244,"skipped":4219,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:43:30.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:43:30.114: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 29 00:43:30.124: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:30.145: INFO: Number of nodes with available pods: 0 Apr 29 00:43:30.145: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:31.150: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:31.154: INFO: Number of nodes with available pods: 0 Apr 29 00:43:31.154: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:32.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:32.190: INFO: Number of nodes with available pods: 0 Apr 29 00:43:32.190: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:33.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:33.280: INFO: Number of nodes with available pods: 1 Apr 29 00:43:33.280: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:34.412: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:35.071: INFO: Number of nodes with available pods: 1 Apr 29 00:43:35.071: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:35.148: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:35.151: INFO: Number of nodes with available pods: 2 Apr 29 00:43:35.151: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 29 00:43:35.189: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:35.189: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:35.262: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:36.270: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:36.270: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:36.307: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:37.436: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:37.436: INFO: Pod daemon-set-g5jt9 is not available Apr 29 00:43:37.436: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:37.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:38.266: INFO: Wrong image for pod: daemon-set-g5jt9. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:38.266: INFO: Pod daemon-set-g5jt9 is not available Apr 29 00:43:38.266: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:38.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:39.280: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:39.280: INFO: Pod daemon-set-g5jt9 is not available Apr 29 00:43:39.280: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:39.436: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:40.268: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:40.268: INFO: Pod daemon-set-g5jt9 is not available Apr 29 00:43:40.268: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:40.273: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:41.267: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 29 00:43:41.267: INFO: Pod daemon-set-g5jt9 is not available Apr 29 00:43:41.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:41.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:42.267: INFO: Wrong image for pod: daemon-set-g5jt9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:42.267: INFO: Pod daemon-set-g5jt9 is not available Apr 29 00:43:42.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:42.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:43.267: INFO: Pod daemon-set-hvp66 is not available Apr 29 00:43:43.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:43.272: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:44.267: INFO: Pod daemon-set-hvp66 is not available Apr 29 00:43:44.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 29 00:43:44.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:45.266: INFO: Pod daemon-set-hvp66 is not available Apr 29 00:43:45.266: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:45.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:46.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:46.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:47.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:47.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:48.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:48.267: INFO: Pod daemon-set-rvhnb is not available Apr 29 00:43:48.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:49.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 29 00:43:49.267: INFO: Pod daemon-set-rvhnb is not available Apr 29 00:43:49.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:50.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:50.267: INFO: Pod daemon-set-rvhnb is not available Apr 29 00:43:50.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:51.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:51.267: INFO: Pod daemon-set-rvhnb is not available Apr 29 00:43:51.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:52.267: INFO: Wrong image for pod: daemon-set-rvhnb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 29 00:43:52.268: INFO: Pod daemon-set-rvhnb is not available Apr 29 00:43:52.272: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:53.266: INFO: Pod daemon-set-5hkqc is not available Apr 29 00:43:53.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 29 00:43:53.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:53.274: INFO: Number of nodes with available pods: 1 Apr 29 00:43:53.274: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:54.278: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:54.281: INFO: Number of nodes with available pods: 1 Apr 29 00:43:54.281: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:55.850: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:55.928: INFO: Number of nodes with available pods: 1 Apr 29 00:43:55.928: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:56.279: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:56.282: INFO: Number of nodes with available pods: 1 Apr 29 00:43:56.282: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:57.279: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:57.283: INFO: Number of nodes with available pods: 1 Apr 29 00:43:57.283: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:43:58.279: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:43:58.282: INFO: Number of nodes with available pods: 2 Apr 29 00:43:58.282: 
INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-327, will wait for the garbage collector to delete the pods Apr 29 00:43:58.354: INFO: Deleting DaemonSet.extensions daemon-set took: 6.729553ms Apr 29 00:43:58.954: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.268755ms Apr 29 00:44:13.058: INFO: Number of nodes with available pods: 0 Apr 29 00:44:13.058: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 00:44:13.060: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-327/daemonsets","resourceVersion":"11860098"},"items":null} Apr 29 00:44:13.063: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-327/pods","resourceVersion":"11860098"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:44:13.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-327" for this suite. 
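The rolling update traced above (httpd:2.4.38-alpine replaced pod-by-pod with agnhost:2.12, old pods marked "not available" before their successors appear) corresponds to a DaemonSet spec along these lines. This is a minimal sketch, not the test's actual manifest: the label key and container name are illustrative, while the name, namespace, and image values are taken from the log.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-327
spec:
  selector:
    matchLabels:
      app: daemon-set          # illustrative label key
  updateStrategy:
    type: RollingUpdate        # pods are replaced one node at a time, as the polling above shows
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app              # illustrative container name
        image: docker.io/library/httpd:2.4.38-alpine
        # the test then patches this to:
        # us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```

Because the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the pod template declares no matching toleration, the framework skips that node when counting available pods, which is why every poll logs the "can't tolerate node latest-control-plane" message.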
• [SLOW TEST:43.058 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":245,"skipped":4222,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:44:13.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:44:29.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7072" for this suite. • [SLOW TEST:16.255 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":246,"skipped":4224,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:44:29.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 00:44:30.143: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 00:44:32.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717870, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717870, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717870, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717870, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 00:44:35.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:44:35.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7139" for this suite. STEP: Destroying namespace "webhook-7139-markers" for this suite. 
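The validating webhooks listed and then deleted in this spec are registered against the e2e-test-webhook service whose endpoints the log waits on. A registration resembles the following sketch; the configuration name, webhook name, and path are hypothetical, while the service name and namespace come from the log.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-validating-webhook   # hypothetical name
webhooks:
- name: deny-configmap.example.com    # hypothetical name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["configmaps"]         # the test creates a non-compliant ConfigMap to probe this
  clientConfig:
    service:
      namespace: webhook-7139
      name: e2e-test-webhook
      path: /validate                 # hypothetical path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

After deleting the collection of configurations, the test creates the same non-compliant ConfigMap again and expects it to succeed, confirming the webhooks are gone.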
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.373 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":247,"skipped":4231,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:44:35.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 29 00:44:35.787: INFO: Waiting up to 5m0s for pod "downward-api-68a78f22-3917-479f-a120-6d1c77299334" in namespace "downward-api-706" to be "Succeeded or Failed" Apr 29 00:44:35.809: INFO: Pod "downward-api-68a78f22-3917-479f-a120-6d1c77299334": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.809202ms Apr 29 00:44:37.814: INFO: Pod "downward-api-68a78f22-3917-479f-a120-6d1c77299334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026380383s Apr 29 00:44:39.817: INFO: Pod "downward-api-68a78f22-3917-479f-a120-6d1c77299334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02973847s STEP: Saw pod success Apr 29 00:44:39.817: INFO: Pod "downward-api-68a78f22-3917-479f-a120-6d1c77299334" satisfied condition "Succeeded or Failed" Apr 29 00:44:39.820: INFO: Trying to get logs from node latest-worker pod downward-api-68a78f22-3917-479f-a120-6d1c77299334 container dapi-container: STEP: delete the pod Apr 29 00:44:39.851: INFO: Waiting for pod downward-api-68a78f22-3917-479f-a120-6d1c77299334 to disappear Apr 29 00:44:39.867: INFO: Pod downward-api-68a78f22-3917-479f-a120-6d1c77299334 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:44:39.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-706" for this suite. 
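The downward-api pod above exposes limits.cpu/limits.memory as environment variables; when the container declares no limits, the kubelet substitutes the node's allocatable values, which is what this conformance test verifies. A minimal sketch of such a pod (the pod name and env var names are illustrative; the container name dapi-container is the one in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod      # the test uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu     # no limit declared, so node allocatable is reported
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```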
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4237,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:44:39.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 29 00:44:40.719: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 29 00:44:42.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717880, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717880, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717880, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723717880, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 29 00:44:45.751: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:44:45.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6107" for this suite. STEP: Destroying namespace "webhook-6107-markers" for this suite. 
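This spec registers both a validating and a mutating webhook scoped to ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, then verifies the API server still allows the dummy configurations to be created and deleted: admission webhooks must not be able to block or mutate changes to webhook configuration objects, or a bad webhook could lock the cluster out of removing it. A mutating registration differs from a validating one only in kind and in carrying a reinvocationPolicy; a sketch with hypothetical names:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook     # hypothetical name
webhooks:
- name: mutate-webhook-config.example.com   # hypothetical name
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: webhook-6107
      name: e2e-test-webhook
      path: /mutate                   # hypothetical path
  admissionReviewVersions: ["v1"]
  sideEffects: None
```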
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.067 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":249,"skipped":4245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:44:45.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5132 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc 
in namespace services-5132 STEP: creating replication controller externalsvc in namespace services-5132 I0429 00:44:48.158973 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5132, replica count: 2 I0429 00:44:51.209477 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0429 00:44:54.209697 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 29 00:44:54.280: INFO: Creating new exec pod Apr 29 00:44:58.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5132 execpodm45fg -- /bin/sh -x -c nslookup nodeport-service' Apr 29 00:45:00.954: INFO: stderr: "I0429 00:45:00.846207 3409 log.go:172] (0xc00052fd90) (0xc000803540) Create stream\nI0429 00:45:00.846250 3409 log.go:172] (0xc00052fd90) (0xc000803540) Stream added, broadcasting: 1\nI0429 00:45:00.848385 3409 log.go:172] (0xc00052fd90) Reply frame received for 1\nI0429 00:45:00.848422 3409 log.go:172] (0xc00052fd90) (0xc00073e000) Create stream\nI0429 00:45:00.848431 3409 log.go:172] (0xc00052fd90) (0xc00073e000) Stream added, broadcasting: 3\nI0429 00:45:00.849406 3409 log.go:172] (0xc00052fd90) Reply frame received for 3\nI0429 00:45:00.849446 3409 log.go:172] (0xc00052fd90) (0xc000768000) Create stream\nI0429 00:45:00.849456 3409 log.go:172] (0xc00052fd90) (0xc000768000) Stream added, broadcasting: 5\nI0429 00:45:00.850335 3409 log.go:172] (0xc00052fd90) Reply frame received for 5\nI0429 00:45:00.939474 3409 log.go:172] (0xc00052fd90) Data frame received for 5\nI0429 00:45:00.939513 3409 log.go:172] (0xc000768000) (5) Data frame handling\nI0429 00:45:00.939544 3409 log.go:172] (0xc000768000) (5) Data frame sent\n+ nslookup 
nodeport-service\nI0429 00:45:00.946038 3409 log.go:172] (0xc00052fd90) Data frame received for 3\nI0429 00:45:00.946066 3409 log.go:172] (0xc00073e000) (3) Data frame handling\nI0429 00:45:00.946083 3409 log.go:172] (0xc00073e000) (3) Data frame sent\nI0429 00:45:00.946881 3409 log.go:172] (0xc00052fd90) Data frame received for 3\nI0429 00:45:00.946898 3409 log.go:172] (0xc00073e000) (3) Data frame handling\nI0429 00:45:00.946913 3409 log.go:172] (0xc00073e000) (3) Data frame sent\nI0429 00:45:00.947452 3409 log.go:172] (0xc00052fd90) Data frame received for 3\nI0429 00:45:00.947480 3409 log.go:172] (0xc00052fd90) Data frame received for 5\nI0429 00:45:00.947493 3409 log.go:172] (0xc000768000) (5) Data frame handling\nI0429 00:45:00.947509 3409 log.go:172] (0xc00073e000) (3) Data frame handling\nI0429 00:45:00.949086 3409 log.go:172] (0xc00052fd90) Data frame received for 1\nI0429 00:45:00.949242 3409 log.go:172] (0xc000803540) (1) Data frame handling\nI0429 00:45:00.949278 3409 log.go:172] (0xc000803540) (1) Data frame sent\nI0429 00:45:00.949292 3409 log.go:172] (0xc00052fd90) (0xc000803540) Stream removed, broadcasting: 1\nI0429 00:45:00.949307 3409 log.go:172] (0xc00052fd90) Go away received\nI0429 00:45:00.949592 3409 log.go:172] (0xc00052fd90) (0xc000803540) Stream removed, broadcasting: 1\nI0429 00:45:00.949609 3409 log.go:172] (0xc00052fd90) (0xc00073e000) Stream removed, broadcasting: 3\nI0429 00:45:00.949616 3409 log.go:172] (0xc00052fd90) (0xc000768000) Stream removed, broadcasting: 5\n" Apr 29 00:45:00.954: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5132.svc.cluster.local\tcanonical name = externalsvc.services-5132.svc.cluster.local.\nName:\texternalsvc.services-5132.svc.cluster.local\nAddress: 10.96.139.37\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5132, will wait for the garbage collector to delete the pods Apr 29 00:45:01.054: INFO: Deleting ReplicationController 
externalsvc took: 33.548596ms Apr 29 00:45:01.155: INFO: Terminating ReplicationController externalsvc pods took: 100.238803ms Apr 29 00:45:12.949: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:45:12.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5132" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:27.099 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":250,"skipped":4278,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:45:13.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-02a163e1-6569-4d22-8116-dd4036ebad38 in namespace container-probe-201 Apr 29 00:45:17.134: INFO: Started pod busybox-02a163e1-6569-4d22-8116-dd4036ebad38 in namespace container-probe-201 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 00:45:17.137: INFO: Initial restart count of pod busybox-02a163e1-6569-4d22-8116-dd4036ebad38 is 0 Apr 29 00:46:11.248: INFO: Restart count of pod container-probe-201/busybox-02a163e1-6569-4d22-8116-dd4036ebad38 is now 1 (54.110954089s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:46:11.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-201" for this suite. • [SLOW TEST:58.256 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client Apr 29 00:46:11.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-9aeeb9c9-56ba-42e7-a010-6de1e6a3e2cc STEP: Creating secret with name s-test-opt-upd-5921cdab-5157-4989-91ab-031d6e161b34 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9aeeb9c9-56ba-42e7-a010-6de1e6a3e2cc STEP: Updating secret s-test-opt-upd-5921cdab-5157-4989-91ab-031d6e161b34 STEP: Creating secret with name s-test-opt-create-8d9bb3c6-e5a1-4489-b69b-078ff6ef6721 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:47:37.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9760" for this suite. 
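The projected-secret spec above mounts secrets with optional: true, then deletes one secret, updates another, and creates a third, waiting for the kubelet to reflect each change in the mounted volume. A sketch of the pod under test (pod and container names are illustrative; the secret names are the ones in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets   # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: s-test-opt-del-9aeeb9c9-56ba-42e7-a010-6de1e6a3e2cc
          optional: true   # optional: a deleted or missing secret does not block the pod
      - secret:
          name: s-test-opt-upd-5921cdab-5157-4989-91ab-031d6e161b34
          optional: true
```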
• [SLOW TEST:86.604 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4343,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:47:37.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 29 00:47:38.021: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:38.026: INFO: Number of nodes with available pods: 0 Apr 29 00:47:38.026: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:39.031: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:39.035: INFO: Number of nodes with available pods: 0 Apr 29 00:47:39.035: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:40.122: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:40.126: INFO: Number of nodes with available pods: 0 Apr 29 00:47:40.126: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:41.050: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:41.054: INFO: Number of nodes with available pods: 0 Apr 29 00:47:41.054: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:42.031: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:42.035: INFO: Number of nodes with available pods: 1 Apr 29 00:47:42.035: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:43.032: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:43.038: INFO: Number of nodes with available pods: 2 Apr 29 00:47:43.038: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 29 00:47:43.088: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:43.091: INFO: Number of nodes with available pods: 1 Apr 29 00:47:43.091: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:44.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:44.100: INFO: Number of nodes with available pods: 1 Apr 29 00:47:44.100: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:45.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:45.102: INFO: Number of nodes with available pods: 1 Apr 29 00:47:45.102: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:46.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:46.101: INFO: Number of nodes with available pods: 1 Apr 29 00:47:46.101: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:47.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:47.102: INFO: Number of nodes with available pods: 1 Apr 29 00:47:47.102: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:48.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Apr 29 00:47:48.100: INFO: Number of nodes with available pods: 1 Apr 29 00:47:48.101: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:49.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:49.102: INFO: Number of nodes with available pods: 1 Apr 29 00:47:49.102: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:50.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:50.100: INFO: Number of nodes with available pods: 1 Apr 29 00:47:50.100: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:51.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:51.101: INFO: Number of nodes with available pods: 1 Apr 29 00:47:51.101: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:52.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:52.100: INFO: Number of nodes with available pods: 1 Apr 29 00:47:52.100: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:53.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:53.101: INFO: Number of nodes with available pods: 1 Apr 29 00:47:53.102: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:54.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:54.100: INFO: Number of nodes with available pods: 1 Apr 29 00:47:54.100: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:55.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:55.101: INFO: Number of nodes with available pods: 1 Apr 29 00:47:55.101: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:47:56.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:47:56.099: INFO: Number of nodes with available pods: 2 Apr 29 00:47:56.099: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9545, will wait for the garbage collector to delete the pods Apr 29 00:47:56.161: INFO: Deleting DaemonSet.extensions daemon-set took: 5.675949ms Apr 29 00:47:56.461: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276834ms Apr 29 00:48:03.065: INFO: Number of nodes with available pods: 0 Apr 29 00:48:03.065: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 00:48:03.068: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9545/daemonsets","resourceVersion":"11861232"},"items":null} Apr 29 00:48:03.070: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9545/pods","resourceVersion":"11861232"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:03.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9545" for this suite. • [SLOW TEST:25.186 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":253,"skipped":4346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:03.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:48:03.172: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430" in namespace "security-context-test-3968" to be "Succeeded or Failed" Apr 29 00:48:03.192: INFO: Pod 
"alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430": Phase="Pending", Reason="", readiness=false. Elapsed: 19.670994ms Apr 29 00:48:05.195: INFO: Pod "alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023241364s Apr 29 00:48:07.200: INFO: Pod "alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027417511s Apr 29 00:48:07.200: INFO: Pod "alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3968" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4371,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:07.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 29 00:48:07.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d" in namespace "projected-7711" to be "Succeeded or Failed" Apr 29 00:48:07.316: INFO: Pod "downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.086438ms Apr 29 00:48:09.320: INFO: Pod "downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007475148s Apr 29 00:48:11.325: INFO: Pod "downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011864553s STEP: Saw pod success Apr 29 00:48:11.325: INFO: Pod "downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d" satisfied condition "Succeeded or Failed" Apr 29 00:48:11.328: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d container client-container: STEP: delete the pod Apr 29 00:48:11.349: INFO: Waiting for pod downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d to disappear Apr 29 00:48:11.368: INFO: Pod downwardapi-volume-5b970e0d-4fbf-46f2-a879-46137b21de8d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7711" for this suite. 
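The DefaultMode assertion above is driven by the `defaultMode` field of the projected volume carrying the downward API items. A minimal sketch, assuming a shape like the test's (field names are from the core v1 API; the pod and path names are illustrative):

```yaml
# Sketch: downward API files exposed through a projected volume with an
# explicit defaultMode, which is the file mode the test reads back.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # applied to every projected file unless an item overrides it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```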
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4379,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:11.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 29 00:48:11.427: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 00:48:11.461: INFO: Waiting for terminating namespaces to be deleted... 
Apr 29 00:48:11.464: INFO: Logging pods the kubelet thinks are on node latest-worker before test Apr 29 00:48:11.469: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:48:11.469: INFO: Container kindnet-cni ready: true, restart count 0 Apr 29 00:48:11.469: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:48:11.469: INFO: Container kube-proxy ready: true, restart count 0 Apr 29 00:48:11.469: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Apr 29 00:48:11.473: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:48:11.473: INFO: Container kindnet-cni ready: true, restart count 0 Apr 29 00:48:11.473: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 29 00:48:11.473: INFO: Container kube-proxy ready: true, restart count 0 Apr 29 00:48:11.473: INFO: alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430 from security-context-test-3968 started at 2020-04-29 00:48:03 +0000 UTC (1 container statuses recorded) Apr 29 00:48:11.473: INFO: Container alpine-nnp-false-e6d56f1f-d8cd-46ea-8d23-72e5195dc430 ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 29 00:48:11.548: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 29 00:48:11.548: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 29 00:48:11.548: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 29 00:48:11.548: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m 
on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 29 00:48:11.548: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 Apr 29 00:48:11.554: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e.160a23e6871fcaf7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5340/filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e.160a23e71147695c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e.160a23e769be0406], Reason = [Created], Message = [Created container filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e] STEP: Considering event: Type = [Normal], Name = [filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e.160a23e777ebc7b7], Reason = [Started], Message = [Started container filler-pod-0b47d907-4e1c-46bb-ad8c-d15d667a8d3e] STEP: Considering event: Type = [Normal], Name = [filler-pod-874d9bce-5c53-45c7-8009-7052db772406.160a23e68588858e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5340/filler-pod-874d9bce-5c53-45c7-8009-7052db772406 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-874d9bce-5c53-45c7-8009-7052db772406.160a23e6d63d1343], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-874d9bce-5c53-45c7-8009-7052db772406.160a23e7181ba305], Reason = [Created], Message = [Created container filler-pod-874d9bce-5c53-45c7-8009-7052db772406] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-874d9bce-5c53-45c7-8009-7052db772406.160a23e72ca6cbfa], Reason = [Started], Message = [Started container filler-pod-874d9bce-5c53-45c7-8009-7052db772406] STEP: Considering event: Type = [Warning], Name = [additional-pod.160a23e7f1a69de3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:18.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5340" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.424 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":256,"skipped":4387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:18.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:48:18.861: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 6.262789ms) Apr 29 00:48:18.883: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 21.277349ms) Apr 29 00:48:18.886: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.609883ms) Apr 29 00:48:18.890: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.538407ms) Apr 29 00:48:18.894: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.70872ms) Apr 29 00:48:18.897: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.435497ms) Apr 29 00:48:18.900: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.292597ms) Apr 29 00:48:18.904: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.431108ms) Apr 29 00:48:18.907: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.26148ms) Apr 29 00:48:18.911: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.32497ms) Apr 29 00:48:18.914: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.747794ms) Apr 29 00:48:18.918: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.152168ms) Apr 29 00:48:18.921: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.96312ms) Apr 29 00:48:18.924: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.331368ms) Apr 29 00:48:18.928: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.697184ms) Apr 29 00:48:18.931: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.406381ms) Apr 29 00:48:18.934: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.148894ms) Apr 29 00:48:18.938: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.445069ms) Apr 29 00:48:18.941: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.482781ms) Apr 29 00:48:18.945: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.310266ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:18.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2412" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":257,"skipped":4466,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:18.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-f7e22ce6-27db-48d6-a790-1f6ae071cc4a [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:19.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7451" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":258,"skipped":4473,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:19.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:19.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5283" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":259,"skipped":4474,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:19.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 29 00:48:19.236: INFO: Waiting up to 5m0s for pod "client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0" in namespace "containers-9593" to be "Succeeded or Failed" Apr 29 00:48:19.239: INFO: Pod "client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.079874ms Apr 29 00:48:21.243: INFO: Pod "client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006913345s Apr 29 00:48:23.247: INFO: Pod "client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0": Phase="Running", Reason="", readiness=true. Elapsed: 4.011440306s Apr 29 00:48:25.257: INFO: Pod "client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021715857s STEP: Saw pod success Apr 29 00:48:25.258: INFO: Pod "client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0" satisfied condition "Succeeded or Failed" Apr 29 00:48:25.261: INFO: Trying to get logs from node latest-worker2 pod client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0 container test-container: STEP: delete the pod Apr 29 00:48:25.345: INFO: Waiting for pod client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0 to disappear Apr 29 00:48:25.360: INFO: Pod client-containers-d5ae8192-09d4-49d0-b1cf-6721932e16c0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:25.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9593" for this suite. • [SLOW TEST:6.177 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4481,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:25.367: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 29 00:48:25.438: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:48:40.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9232" for this suite. • [SLOW TEST:15.331 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":261,"skipped":4494,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:48:40.698: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 29 00:48:40.800: INFO: Create a RollingUpdate DaemonSet Apr 29 00:48:40.802: INFO: Check that daemon pods launch on every node of the cluster Apr 29 00:48:40.812: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:40.817: INFO: Number of nodes with available pods: 0 Apr 29 00:48:40.817: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:48:41.821: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:41.824: INFO: Number of nodes with available pods: 0 Apr 29 00:48:41.824: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:48:42.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:42.825: INFO: Number of nodes with available pods: 0 Apr 29 00:48:42.825: INFO: Node latest-worker is running more than one daemon pod Apr 29 00:48:43.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:43.872: INFO: Number of nodes with available pods: 0 Apr 29 00:48:43.872: INFO: Node latest-worker is running more than one daemon pod Apr 29 
00:48:44.822: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:44.825: INFO: Number of nodes with available pods: 2 Apr 29 00:48:44.825: INFO: Number of running nodes: 2, number of available pods: 2 Apr 29 00:48:44.825: INFO: Update the DaemonSet to trigger a rollout Apr 29 00:48:44.831: INFO: Updating DaemonSet daemon-set Apr 29 00:48:53.849: INFO: Roll back the DaemonSet before rollout is complete Apr 29 00:48:53.855: INFO: Updating DaemonSet daemon-set Apr 29 00:48:53.855: INFO: Make sure DaemonSet rollback is complete Apr 29 00:48:53.865: INFO: Wrong image for pod: daemon-set-v4n9g. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 29 00:48:53.865: INFO: Pod daemon-set-v4n9g is not available Apr 29 00:48:53.901: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:54.906: INFO: Wrong image for pod: daemon-set-v4n9g. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 29 00:48:54.906: INFO: Pod daemon-set-v4n9g is not available Apr 29 00:48:54.910: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 00:48:55.907: INFO: Pod daemon-set-g7t7s is not available Apr 29 00:48:55.910: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4095, will wait for the garbage collector to delete the pods Apr 29 00:48:55.973: INFO: Deleting DaemonSet.extensions daemon-set took: 5.869939ms Apr 29 00:49:06.674: INFO: Terminating DaemonSet.extensions daemon-set pods took: 10.70026061s Apr 29 00:49:09.691: INFO: Number of nodes with available pods: 0 Apr 29 00:49:09.691: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 00:49:09.693: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4095/daemonsets","resourceVersion":"11861708"},"items":null} Apr 29 00:49:09.695: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4095/pods","resourceVersion":"11861708"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:49:09.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4095" for this suite. 
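The "Make sure DaemonSet rollback is complete" phase above flags any daemon pod still running the rollout image (`foo:non-existent`) instead of the original `docker.io/library/httpd:2.4.38-alpine`, and keeps polling until no such pod remains. A minimal Python sketch of that per-poll check (a hypothetical helper for illustration, not the e2e framework's actual code):

```python
def rollback_incomplete_pods(pod_images, expected_image):
    """Return the names of daemon pods whose container image differs
    from the rolled-back (expected) image, sorted for stable output."""
    return sorted(name for name, image in pod_images.items()
                  if image != expected_image)

# Mirrors the log: daemon-set-v4n9g is still on the bad rollout image,
# so the rollback is not yet complete and the test polls again.
pods = {
    "daemon-set-v4n9g": "foo:non-existent",
    "daemon-set-g7t7s": "docker.io/library/httpd:2.4.38-alpine",
}
print(rollback_incomplete_pods(pods, "docker.io/library/httpd:2.4.38-alpine"))
# -> ['daemon-set-v4n9g']
```

Once this list is empty, the rollback is considered complete and the AfterEach cleanup deletes the DaemonSet.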
• [SLOW TEST:29.011 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":262,"skipped":4508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:49:09.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 29 00:49:09.773: INFO: Waiting up to 5m0s for pod "downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb" in namespace "downward-api-1305" to be "Succeeded or Failed" Apr 29 00:49:09.841: INFO: Pod "downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 68.544571ms Apr 29 00:49:11.846: INFO: Pod "downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073235529s Apr 29 00:49:13.850: INFO: Pod "downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077490753s STEP: Saw pod success Apr 29 00:49:13.850: INFO: Pod "downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb" satisfied condition "Succeeded or Failed" Apr 29 00:49:13.853: INFO: Trying to get logs from node latest-worker2 pod downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb container dapi-container: STEP: delete the pod Apr 29 00:49:13.890: INFO: Waiting for pod downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb to disappear Apr 29 00:49:13.902: INFO: Pod downward-api-cb30216e-b561-481b-9ce8-d4d7b7cfc8bb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:49:13.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1305" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4532,"failed":0} SSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:49:13.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:49:27.996: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7045" for this suite. • [SLOW TEST:14.092 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":264,"skipped":4535,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:49:28.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 29 00:49:28.062: INFO: Waiting up to 5m0s for pod "pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a" in namespace "emptydir-7336" to be "Succeeded or Failed" Apr 29 00:49:28.064: INFO: Pod "pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227611ms Apr 29 00:49:30.068: INFO: Pod "pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006133577s Apr 29 00:49:32.072: INFO: Pod "pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010256428s STEP: Saw pod success Apr 29 00:49:32.072: INFO: Pod "pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a" satisfied condition "Succeeded or Failed" Apr 29 00:49:32.075: INFO: Trying to get logs from node latest-worker pod pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a container test-container: STEP: delete the pod Apr 29 00:49:32.106: INFO: Waiting for pod pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a to disappear Apr 29 00:49:32.260: INFO: Pod pod-31c5ea76-cab9-480f-84d4-0c4715e5bb5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:49:32.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7336" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4552,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:49:32.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-fbxf STEP: Creating a pod to test atomic-volume-subpath Apr 29 00:49:32.334: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fbxf" in namespace "subpath-6803" to be "Succeeded or Failed" Apr 29 00:49:32.338: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738562ms Apr 29 00:49:34.341: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006939635s Apr 29 00:49:36.345: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 4.010964639s Apr 29 00:49:38.349: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 6.015094941s Apr 29 00:49:40.354: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 8.019462666s Apr 29 00:49:42.358: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 10.023819143s Apr 29 00:49:44.363: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 12.028255613s Apr 29 00:49:46.367: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 14.032202696s Apr 29 00:49:48.371: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 16.036706911s Apr 29 00:49:50.375: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 18.040986131s Apr 29 00:49:52.380: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. Elapsed: 20.045403079s Apr 29 00:49:54.384: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.049659984s Apr 29 00:49:56.388: INFO: Pod "pod-subpath-test-projected-fbxf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053869901s STEP: Saw pod success Apr 29 00:49:56.388: INFO: Pod "pod-subpath-test-projected-fbxf" satisfied condition "Succeeded or Failed" Apr 29 00:49:56.391: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-fbxf container test-container-subpath-projected-fbxf: STEP: delete the pod Apr 29 00:49:56.419: INFO: Waiting for pod pod-subpath-test-projected-fbxf to disappear Apr 29 00:49:56.446: INFO: Pod pod-subpath-test-projected-fbxf no longer exists STEP: Deleting pod pod-subpath-test-projected-fbxf Apr 29 00:49:56.446: INFO: Deleting pod "pod-subpath-test-projected-fbxf" in namespace "subpath-6803" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:49:56.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6803" for this suite. 
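The per-pod waits above ("Waiting up to 5m0s for pod … to be 'Succeeded or Failed'", with a status line roughly every two seconds) all follow the same poll-until-terminal-phase pattern. A simplified sketch of that wait loop, with `get_phase` and `sleep` injected as stand-ins for the real API call and real sleeping (an illustration of the pattern, not the framework's actual implementation):

```python
def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                            sleep=lambda s: None):
    """Poll get_phase() until it reports a terminal pod phase
    ('Succeeded' or 'Failed'); return (phase, elapsed_seconds)."""
    elapsed = 0.0
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        sleep(poll_s)
        elapsed += poll_s
    raise TimeoutError(f"pod not terminal within {timeout_s}s")

# A pod that is Pending once, Running once, then Succeeded:
phases = iter(["Pending", "Running", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases)))
# -> ('Succeeded', 4.0)
```

The elapsed values printed in the log (2.00s, 4.01s, …) are exactly the accumulation this loop performs, plus API round-trip jitter.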
• [SLOW TEST:24.187 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":266,"skipped":4564,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:49:56.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the 
expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:50:31.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2030" for this suite. • [SLOW TEST:34.784 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4581,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir 
volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:50:31.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 29 00:50:31.322: INFO: Waiting up to 5m0s for pod "pod-97e56bbe-d574-4df4-94b4-5afbb17f471e" in namespace "emptydir-3457" to be "Succeeded or Failed" Apr 29 00:50:31.346: INFO: Pod "pod-97e56bbe-d574-4df4-94b4-5afbb17f471e": Phase="Pending", Reason="", readiness=false. Elapsed: 23.979762ms Apr 29 00:50:33.350: INFO: Pod "pod-97e56bbe-d574-4df4-94b4-5afbb17f471e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027884136s Apr 29 00:50:35.354: INFO: Pod "pod-97e56bbe-d574-4df4-94b4-5afbb17f471e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031578641s STEP: Saw pod success Apr 29 00:50:35.354: INFO: Pod "pod-97e56bbe-d574-4df4-94b4-5afbb17f471e" satisfied condition "Succeeded or Failed" Apr 29 00:50:35.356: INFO: Trying to get logs from node latest-worker pod pod-97e56bbe-d574-4df4-94b4-5afbb17f471e container test-container: STEP: delete the pod Apr 29 00:50:35.395: INFO: Waiting for pod pod-97e56bbe-d574-4df4-94b4-5afbb17f471e to disappear Apr 29 00:50:35.407: INFO: Pod pod-97e56bbe-d574-4df4-94b4-5afbb17f471e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:50:35.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3457" for this suite. 
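The emptydir permission tests above ((root,0666,default) and (non-root,0644,tmpfs)) boil down to: write a file into the volume with the requested mode, then verify the permission bits and content stick. The same check reduced to plain Python against a temporary directory (an illustration of what is being asserted, not the mounttest container's code; assumes a POSIX filesystem where `chmod` is honored):

```python
import os
import stat
import tempfile

def write_with_mode(path, mode, data=b"mount-tester content"):
    """Create a file, force its permission bits to `mode`, and
    return the bits actually recorded on disk."""
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    print(oct(write_with_mode(os.path.join(d, "test-file"), 0o644)))
    # -> 0o644
```

In the real test the interesting part is the volume medium (default disk vs. tmpfs) and the user the container runs as; the assertion itself is no more than this.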
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:50:35.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 29 00:50:40.027: INFO: Successfully updated pod "annotationupdate6f2fb631-f193-46ff-96a0-d08084f738eb" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 00:50:42.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8525" for this suite. 
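The projected downwardAPI test above exposes the pod's annotations as a file in the volume, updates an annotation ("Successfully updated pod annotationupdate…"), and then waits for the kubelet to re-project the file. A small sketch of the rendering and the change check (the real file is produced by the kubelet; the `key="value"`-per-line, sorted-key format here is an assumption for illustration, as are the annotation values):

```python
def render_annotations(annotations):
    """Render pod annotations the way a downward-API volume file
    presents them: one key="value" line per annotation (sorted here
    for deterministic output)."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))

# The test's watcher simply waits until the projected file content
# differs from what it read before the annotation update.
before = render_annotations({"builder": "bar"})
after = render_annotations({"builder": "foo"})
print(before != after)
# -> True
```

This is why the test needs a wait after the update: the projection is eventually consistent, refreshed on the kubelet's sync period rather than synchronously with the API write.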
• [SLOW TEST:6.657 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:50:42.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-7fc243fb-1b3e-4890-a5c3-9a14c3faa421 STEP: Creating configMap with name cm-test-opt-upd-8e7d899e-63d3-4379-9899-9956369489dc STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7fc243fb-1b3e-4890-a5c3-9a14c3faa421 STEP: Updating configmap cm-test-opt-upd-8e7d899e-63d3-4379-9899-9956369489dc STEP: Creating configMap with name cm-test-opt-create-953528b7-24ab-4f19-93d5-1bfae3ec39f6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 29 
00:52:02.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-558" for this suite. • [SLOW TEST:80.551 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 29 00:52:02.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-cv96 STEP: Creating a pod to test atomic-volume-subpath Apr 29 00:52:02.698: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cv96" in namespace "subpath-7333" to be "Succeeded or Failed" Apr 29 00:52:02.714: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.077151ms Apr 29 00:52:04.718: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019913198s Apr 29 00:52:06.735: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 4.037717995s Apr 29 00:52:08.819: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 6.12138295s Apr 29 00:52:10.823: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 8.125001812s Apr 29 00:52:12.827: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 10.128782279s Apr 29 00:52:14.837: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 12.139287615s Apr 29 00:52:16.839: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 14.14171689s Apr 29 00:52:18.863: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 16.165224017s Apr 29 00:52:20.867: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 18.169141244s Apr 29 00:52:22.871: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 20.173181233s Apr 29 00:52:24.875: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Running", Reason="", readiness=true. Elapsed: 22.177360729s Apr 29 00:52:26.879: INFO: Pod "pod-subpath-test-configmap-cv96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.181034218s
STEP: Saw pod success
Apr 29 00:52:26.879: INFO: Pod "pod-subpath-test-configmap-cv96" satisfied condition "Succeeded or Failed"
Apr 29 00:52:26.882: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-cv96 container test-container-subpath-configmap-cv96:
STEP: delete the pod
Apr 29 00:52:26.918: INFO: Waiting for pod pod-subpath-test-configmap-cv96 to disappear
Apr 29 00:52:26.941: INFO: Pod pod-subpath-test-configmap-cv96 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cv96
Apr 29 00:52:26.941: INFO: Deleting pod "pod-subpath-test-configmap-cv96" in namespace "subpath-7333"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:52:26.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7333" for this suite.
• [SLOW TEST:24.327 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":271,"skipped":4688,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:52:26.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b
Apr 29 00:52:27.108: INFO: Pod name my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b: Found 0 pods out of 1
Apr 29 00:52:32.112: INFO: Pod name my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b: Found 1 pods out of 1
Apr 29 00:52:32.112: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b" are running
Apr 29 00:52:32.114: INFO: Pod "my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b-c95df" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:52:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:52:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:52:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 00:52:27 +0000 UTC Reason: Message:}])
Apr 29 00:52:32.115: INFO: Trying to dial the pod
Apr 29 00:52:37.126: INFO: Controller my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b: Got expected result from replica 1 [my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b-c95df]: "my-hostname-basic-1b6ff204-dc51-4238-87dd-5cd0ca1b1f8b-c95df", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:52:37.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5231" for this suite.
• [SLOW TEST:10.181 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":272,"skipped":4694,"failed":0}
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:52:37.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 29 00:52:37.205: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 29 00:52:42.208: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:52:42.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3427" for this suite.
• [SLOW TEST:5.192 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":273,"skipped":4694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:52:42.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 29 00:52:42.414: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:52:50.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7200" for this suite.
• [SLOW TEST:8.054 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":274,"skipped":4716,"failed":0}
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 29 00:52:50.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-af678fb3-8403-4d09-ab63-28eef8ef6ac2
STEP: Creating a pod to test consume configMaps
Apr 29 00:52:50.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe" in namespace "configmap-703" to be "Succeeded or Failed"
Apr 29 00:52:50.499: INFO: Pod "pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446301ms
Apr 29 00:52:52.502: INFO: Pod "pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01226209s
Apr 29 00:52:54.507: INFO: Pod "pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016728578s
STEP: Saw pod success
Apr 29 00:52:54.507: INFO: Pod "pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe" satisfied condition "Succeeded or Failed"
Apr 29 00:52:54.510: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe container configmap-volume-test:
STEP: delete the pod
Apr 29 00:52:54.536: INFO: Waiting for pod pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe to disappear
Apr 29 00:52:54.541: INFO: Pod pod-configmaps-e53d8fd7-648a-4741-abee-1732c09dbafe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 29 00:52:54.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-703" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4716,"failed":0}
S
Apr 29 00:52:54.549: INFO: Running AfterSuite actions on all nodes
Apr 29 00:52:54.549: INFO: Running AfterSuite actions on node 1
Apr 29 00:52:54.549: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4519.086 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS