I0308 13:19:09.686735 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0308 13:19:09.686971 6 e2e.go:109] Starting e2e run "9dbdea0e-215a-4859-904d-cbfe7e9b5fdf" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583673548 - Will randomize all specs
Will run 278 of 4814 specs

Mar 8 13:19:09.737: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 13:19:09.741: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 8 13:19:09.769: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 8 13:19:09.810: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 8 13:19:09.810: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 8 13:19:09.810: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 8 13:19:09.820: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 8 13:19:09.820: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 8 13:19:09.820: INFO: e2e test version: v1.17.0
Mar 8 13:19:09.821: INFO: kube-apiserver version: v1.17.0
Mar 8 13:19:09.821: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 13:19:09.827: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:19:09.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Mar 8 13:19:09.875: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:19:09.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4994" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:19:09.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:19:09.947: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 8 13:19:14.950: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 8 13:19:18.956: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 8 13:19:20.960: INFO: Creating deployment "test-rollover-deployment"
Mar 8 13:19:20.966: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 8 13:19:22.972: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 8 13:19:22.978: INFO: Ensure that both replica sets have 1 created replica
Mar 8 13:19:22.984: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 8 13:19:22.991: INFO: Updating deployment test-rollover-deployment
Mar 8 13:19:22.991: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 8 13:19:25.002: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 8 13:19:25.008: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 8 13:19:25.014: INFO: all replica sets need to contain the pod-template-hash label
Mar 8 13:19:25.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270364, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270360, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 8 13:19:27.021: INFO: all replica sets need to contain the pod-template-hash label
Mar 8 13:19:27.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270364, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270360, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 8 13:19:29.021: INFO: all replica sets need to contain the pod-template-hash label
Mar 8 13:19:29.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270364, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270360, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 8 13:19:31.021: INFO: all replica sets need to contain the pod-template-hash label
Mar 8 13:19:31.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270364, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270360, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 8 13:19:33.021: INFO: all replica sets need to contain the pod-template-hash label
Mar 8 13:19:33.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270361, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270364, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270360, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 8 13:19:35.021: INFO:
Mar 8 13:19:35.021: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar 8 13:19:35.030: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3769 /apis/apps/v1/namespaces/deployment-3769/deployments/test-rollover-deployment 19d63894-0b66-4fde-b7bb-b08d3f2f34df 6405 2 2020-03-08 13:19:20 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a44918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 13:19:21 +0000 UTC,LastTransitionTime:2020-03-08 13:19:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-08 13:19:34 +0000 UTC,LastTransitionTime:2020-03-08 13:19:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Mar 8 13:19:35.033: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-3769 /apis/apps/v1/namespaces/deployment-3769/replicasets/test-rollover-deployment-574d6dfbff fb24a431-3d3a-42b2-bc10-e87adff81370 6394 2 2020-03-08 13:19:22 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 19d63894-0b66-4fde-b7bb-b08d3f2f34df 0xc002a78477 0xc002a78478}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a784e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 8 13:19:35.033: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Mar 8 13:19:35.033: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3769 /apis/apps/v1/namespaces/deployment-3769/replicasets/test-rollover-controller b4c5e8eb-7cd9-4c0a-8a0f-1a3a039cc77b 6403 2 2020-03-08 13:19:09 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 19d63894-0b66-4fde-b7bb-b08d3f2f34df 0xc002a7839f 0xc002a783b0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a78418 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 8 13:19:35.033: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-3769 /apis/apps/v1/namespaces/deployment-3769/replicasets/test-rollover-deployment-f6c94f66c 27d258f8-5ec2-433f-9a75-fb91a3642798 6356 2 2020-03-08 13:19:20 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 19d63894-0b66-4fde-b7bb-b08d3f2f34df 0xc002a78540 0xc002a78541}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a785b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 8 13:19:35.037: INFO: Pod "test-rollover-deployment-574d6dfbff-qgfqz" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-qgfqz test-rollover-deployment-574d6dfbff- deployment-3769 /api/v1/namespaces/deployment-3769/pods/test-rollover-deployment-574d6dfbff-qgfqz 8e3fc53d-3b35-4870-a43c-4935451c95d8 6362 0 2020-03-08 13:19:23 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff fb24a431-3d3a-42b2-bc10-e87adff81370 0xc002a78ac7 0xc002a78ac8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nffn4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nffn4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nffn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:19:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:19:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:19:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.23,StartTime:2020-03-08 13:19:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 13:19:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://601181224184ecc13b2f27699e40f49dbd9e03ad0b40c498ec92fb5df6068098,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:19:35.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3769" for this suite.
• [SLOW TEST:25.158 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":2,"skipped":33,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:19:35.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0308 13:19:45.151194 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 8 13:19:45.151: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:19:45.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4268" for this suite.
• [SLOW TEST:10.112 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":3,"skipped":48,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:19:45.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:19:52.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2674" for this suite.
• [SLOW TEST:7.081 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":4,"skipped":81,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:19:52.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 8 13:19:56.332: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:19:56.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5286" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":82,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:19:56.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar 8 13:19:57.281: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 13:20:00.302: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:20:00.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:01.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9753" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:5.314 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":6,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:01.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Mar 8 13:20:03.748: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 8 13:20:08.892: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:08.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4957" for this suite.
• [SLOW TEST:7.231 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":7,"skipped":145,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:08.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 13:20:09.344: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 13:20:12.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:12.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3067" for this suite.
STEP: Destroying namespace "webhook-3067-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":8,"skipped":155,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:12.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 8 13:20:12.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5990'
Mar 8 13:20:14.642: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 8 13:20:14.642: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Mar 8 13:20:14.659: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-fwppq]
Mar 8 13:20:14.659: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-fwppq" in namespace "kubectl-5990" to be "running and ready"
Mar 8 13:20:14.703: INFO: Pod "e2e-test-httpd-rc-fwppq": Phase="Pending", Reason="", readiness=false. Elapsed: 44.115721ms
Mar 8 13:20:16.707: INFO: Pod "e2e-test-httpd-rc-fwppq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048182919s
Mar 8 13:20:18.711: INFO: Pod "e2e-test-httpd-rc-fwppq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052507874s
Mar 8 13:20:20.715: INFO: Pod "e2e-test-httpd-rc-fwppq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056540907s
Mar 8 13:20:22.719: INFO: Pod "e2e-test-httpd-rc-fwppq": Phase="Running", Reason="", readiness=true. Elapsed: 8.060391274s
Mar 8 13:20:22.719: INFO: Pod "e2e-test-httpd-rc-fwppq" satisfied condition "running and ready"
Mar 8 13:20:22.719: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-fwppq]
Mar 8 13:20:22.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5990'
Mar 8 13:20:22.894: INFO: stderr: ""
Mar 8 13:20:22.894: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.25. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.25. Set the 'ServerName' directive globally to suppress this message\n[Sun Mar 08 13:20:21.766918 2020] [mpm_event:notice] [pid 1:tid 139761288686440] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Mar 08 13:20:21.766999 2020] [core:notice] [pid 1:tid 139761288686440] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Mar 8 13:20:22.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5990'
Mar 8 13:20:23.027: INFO: stderr: ""
Mar 8 13:20:23.027: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:23.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5990" for this suite.
• [SLOW TEST:10.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":9,"skipped":161,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:23.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:20:23.110: INFO: Create a RollingUpdate DaemonSet
Mar 8 13:20:23.114: INFO: Check that daemon pods launch on every node of the cluster
Mar 8 13:20:23.121: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:23.126: INFO: Number of nodes with available pods: 0
Mar 8 13:20:23.126: INFO: Node kind-worker is running more than one daemon pod
Mar 8 13:20:24.130: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:24.134: INFO: Number of nodes with available pods: 0
Mar 8 13:20:24.134: INFO: Node kind-worker is running more than one daemon pod
Mar 8 13:20:25.130: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:25.134: INFO: Number of nodes with available pods: 1
Mar 8 13:20:25.134: INFO: Node kind-worker is running more than one daemon pod
Mar 8 13:20:26.130: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:26.133: INFO: Number of nodes with available pods: 2
Mar 8 13:20:26.133: INFO: Number of running nodes: 2, number of available pods: 2
Mar 8 13:20:26.133: INFO: Update the DaemonSet to trigger a rollout
Mar 8 13:20:26.141: INFO: Updating DaemonSet daemon-set
Mar 8 13:20:40.162: INFO: Roll back the DaemonSet before rollout is complete
Mar 8 13:20:40.168: INFO: Updating DaemonSet daemon-set
Mar 8 13:20:40.168: INFO: Make sure DaemonSet rollback is complete
Mar 8 13:20:40.171: INFO: Wrong image for pod: daemon-set-sfnd6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 8 13:20:40.171: INFO: Pod daemon-set-sfnd6 is not available
Mar 8 13:20:40.177: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:41.182: INFO: Wrong image for pod: daemon-set-sfnd6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 8 13:20:41.182: INFO: Pod daemon-set-sfnd6 is not available
Mar 8 13:20:41.185: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:42.181: INFO: Wrong image for pod: daemon-set-sfnd6. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 8 13:20:42.181: INFO: Pod daemon-set-sfnd6 is not available
Mar 8 13:20:42.194: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 13:20:43.188: INFO: Pod daemon-set-ls5x2 is not available
Mar 8 13:20:43.191: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3261, will wait for the garbage collector to delete the pods
Mar 8 13:20:43.254: INFO: Deleting DaemonSet.extensions daemon-set took: 5.593979ms
Mar 8 13:20:43.355: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.200679ms
Mar 8 13:20:49.457: INFO: Number of nodes with available pods: 0
Mar 8 13:20:49.457: INFO: Number of running nodes: 0, number of available pods: 0
Mar 8 13:20:49.460: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3261/daemonsets","resourceVersion":"7023"},"items":null}
Mar 8 13:20:49.463: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3261/pods","resourceVersion":"7023"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:49.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3261" for this suite.
• [SLOW TEST:26.442 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":10,"skipped":185,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:49.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:20:49.539: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-46388d04-22ce-4ea2-a621-748e13660df0" in namespace "security-context-test-7711" to be "success or failure"
Mar 8 13:20:49.541: INFO: Pod "busybox-readonly-false-46388d04-22ce-4ea2-a621-748e13660df0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195414ms
Mar 8 13:20:51.545: INFO: Pod "busybox-readonly-false-46388d04-22ce-4ea2-a621-748e13660df0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005389943s
Mar 8 13:20:51.545: INFO: Pod "busybox-readonly-false-46388d04-22ce-4ea2-a621-748e13660df0" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:51.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7711" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":194,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:51.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-eff5307c-9125-425f-af0c-2d735f819a29
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:51.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7892" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":12,"skipped":195,"failed":0}
SS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:51.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Mar 8 13:20:51.707: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6867" to be "success or failure"
Mar 8 13:20:51.710: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303604ms
Mar 8 13:20:53.716: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009295718s
Mar 8 13:20:55.722: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015727958s
Mar 8 13:20:57.727: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020333422s
STEP: Saw pod success
Mar 8 13:20:57.727: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 8 13:20:57.730: INFO: Trying to get logs from node kind-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 8 13:20:57.764: INFO: Waiting for pod pod-host-path-test to disappear
Mar 8 13:20:57.770: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:20:57.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6867" for this suite.
• [SLOW TEST:6.151 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":197,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:20:57.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:21:05.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6721" for this suite.
• [SLOW TEST:8.095 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":14,"skipped":203,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:21:05.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:21:05.982: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e4380738-ad71-4f23-b656-665bcc0a2cf9" in namespace "security-context-test-7751" to be "success or failure"
Mar 8 13:21:05.986: INFO: Pod "busybox-user-65534-e4380738-ad71-4f23-b656-665bcc0a2cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184485ms
Mar 8 13:21:08.008: INFO: Pod "busybox-user-65534-e4380738-ad71-4f23-b656-665bcc0a2cf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026385703s
Mar 8 13:21:08.009: INFO: Pod "busybox-user-65534-e4380738-ad71-4f23-b656-665bcc0a2cf9" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:21:08.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7751" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":205,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:21:08.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 8 13:21:08.066: INFO: Waiting up to 5m0s for pod "pod-31cf799f-bb91-4726-b009-c747aafe0cc0" in namespace "emptydir-5727" to be "success or failure"
Mar 8 13:21:08.071: INFO: Pod "pod-31cf799f-bb91-4726-b009-c747aafe0cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.675909ms
Mar 8 13:21:10.074: INFO: Pod "pod-31cf799f-bb91-4726-b009-c747aafe0cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008092513s
Mar 8 13:21:12.078: INFO: Pod "pod-31cf799f-bb91-4726-b009-c747aafe0cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011922581s
STEP: Saw pod success
Mar 8 13:21:12.078: INFO: Pod "pod-31cf799f-bb91-4726-b009-c747aafe0cc0" satisfied condition "success or failure"
Mar 8 13:21:12.081: INFO: Trying to get logs from node kind-worker2 pod pod-31cf799f-bb91-4726-b009-c747aafe0cc0 container test-container:
STEP: delete the pod
Mar 8 13:21:12.124: INFO: Waiting for pod pod-31cf799f-bb91-4726-b009-c747aafe0cc0 to disappear
Mar 8 13:21:12.127: INFO: Pod pod-31cf799f-bb91-4726-b009-c747aafe0cc0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:21:12.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5727" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:21:12.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:21:12.179: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:21:13.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4697" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":17,"skipped":282,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:21:13.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:21:37.302: INFO: Container started at 2020-03-08 13:21:16 +0000 UTC, pod became ready at 2020-03-08 13:21:35 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:21:37.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7794" for this suite.
• [SLOW TEST:24.101 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":289,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:21:37.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar 8 13:21:39.894: INFO: Successfully updated pod "labelsupdateae60ad48-2272-41d3-8cc4-9bae6b152759"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:21:41.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6154" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:21:41.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating an pod Mar 8 13:21:42.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8633 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 8 13:21:42.122: INFO: stderr: "" Mar 8 13:21:42.122: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. 
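The retrieval-and-filtering steps that follow exercise `kubectl logs` with `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. The trimming semantics of the first two can be mimicked locally with coreutils — a sketch on a made-up log file, purely for illustration of what the flags do to the stream:

```shell
# Create a throwaway log file (hypothetical contents, stands in for pod logs).
printf 'line1\nline2\nline3\n' > /tmp/demo.log

tail -n 1 /tmp/demo.log    # analogous to --tail=1: only the last line
head -c 1 /tmp/demo.log    # analogous to --limit-bytes=1: only the first byte
```

`--since` has no such byte-level analogue: it is a time-based cutoff resolved against each log line's timestamp on the node, which is why the `--since=1s` invocation below returns only the most recent generator lines while `--since=24h` returns the full history.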
Mar 8 13:21:42.122: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 8 13:21:42.122: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8633" to be "running and ready, or succeeded" Mar 8 13:21:42.130: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095112ms Mar 8 13:21:44.135: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.012502927s Mar 8 13:21:44.135: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 8 13:21:44.135: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Mar 8 13:21:44.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8633' Mar 8 13:21:44.266: INFO: stderr: "" Mar 8 13:21:44.266: INFO: stdout: "I0308 13:21:43.272880 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/x8vv 596\nI0308 13:21:43.473056 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/vbcb 593\nI0308 13:21:43.673150 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/2jd 244\nI0308 13:21:43.873058 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/5ldm 262\nI0308 13:21:44.073071 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/rcmx 457\n" STEP: limiting log lines Mar 8 13:21:44.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8633 --tail=1' Mar 8 13:21:44.363: INFO: stderr: "" Mar 8 13:21:44.363: INFO: stdout: "I0308 13:21:44.273086 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/bx7 474\n" Mar 8 13:21:44.363: INFO: got output "I0308 13:21:44.273086 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/bx7 474\n" STEP: limiting log bytes Mar 8 13:21:44.363: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8633 --limit-bytes=1' Mar 8 13:21:44.462: INFO: stderr: "" Mar 8 13:21:44.462: INFO: stdout: "I" Mar 8 13:21:44.462: INFO: got output "I" STEP: exposing timestamps Mar 8 13:21:44.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8633 --tail=1 --timestamps' Mar 8 13:21:44.562: INFO: stderr: "" Mar 8 13:21:44.562: INFO: stdout: "2020-03-08T13:21:44.473196157Z I0308 13:21:44.473068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/x29p 260\n" Mar 8 13:21:44.562: INFO: got output "2020-03-08T13:21:44.473196157Z I0308 13:21:44.473068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/x29p 260\n" STEP: restricting to a time range Mar 8 13:21:47.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8633 --since=1s' Mar 8 13:21:47.181: INFO: stderr: "" Mar 8 13:21:47.181: INFO: stdout: "I0308 13:21:46.273068 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nb8w 371\nI0308 13:21:46.473074 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/22v 340\nI0308 13:21:46.673042 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/wt6h 439\nI0308 13:21:46.873164 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/cqmr 342\nI0308 13:21:47.073032 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/v59 268\n" Mar 8 13:21:47.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8633 --since=24h' Mar 8 13:21:47.272: INFO: stderr: "" Mar 8 13:21:47.272: INFO: stdout: "I0308 13:21:43.272880 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/x8vv 596\nI0308 13:21:43.473056 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/vbcb 593\nI0308 13:21:43.673150 1 
logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/2jd 244\nI0308 13:21:43.873058 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/5ldm 262\nI0308 13:21:44.073071 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/rcmx 457\nI0308 13:21:44.273086 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/bx7 474\nI0308 13:21:44.473068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/x29p 260\nI0308 13:21:44.673094 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/jbx 261\nI0308 13:21:44.873042 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/dvj 263\nI0308 13:21:45.073038 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/qpg 587\nI0308 13:21:45.273047 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/jdd 596\nI0308 13:21:45.473069 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/ctv 257\nI0308 13:21:45.673049 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/wdm 544\nI0308 13:21:45.873031 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/kfw 441\nI0308 13:21:46.073034 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/8vk 344\nI0308 13:21:46.273068 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nb8w 371\nI0308 13:21:46.473074 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/22v 340\nI0308 13:21:46.673042 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/wt6h 439\nI0308 13:21:46.873164 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/cqmr 342\nI0308 13:21:47.073032 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/v59 268\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Mar 8 13:21:47.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8633' Mar 8 13:21:58.643: INFO: stderr: "" Mar 8 13:21:58.643: INFO: stdout: "pod 
\"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:21:58.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8633" for this suite. • [SLOW TEST:16.723 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":20,"skipped":356,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:21:58.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:21:59.635: INFO: deployment "sample-webhook-deployment" 
doesn't have the required revision set Mar 8 13:22:01.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270519, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270519, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270519, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270519, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:22:04.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:22:04.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2233" for this suite. STEP: Destroying namespace "webhook-2233-markers" for this suite. 
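Listing and collection-deleting mutating webhooks, as the steps above do, operates on `MutatingWebhookConfiguration` objects. A bare-bones sketch of such an object — the names, path, and rules here are invented for illustration and are not the configuration the test registers:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook     # hypothetical
webhooks:
- name: mutate-configmaps.example.com   # hypothetical
  clientConfig:
    service:
      name: e2e-test-webhook         # service name mirrors the one in the log
      namespace: webhook-2233
      path: /mutating-configmaps     # hypothetical path
    caBundle: ""                     # base64-encoded CA, omitted here
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The test's "Listing" and "Deleting the collection" steps correspond to a list and a label-selected collection delete on this cluster-scoped resource; once the configurations are gone, the final ConfigMap is created unmodified, which is what the last step verifies.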
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.412 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":21,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:22:05.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 8 13:22:05.128: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 8 13:22:10.132: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:22:11.146: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-323" for this suite. • [SLOW TEST:6.082 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":22,"skipped":422,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:22:11.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-6ac81f8c-4be4-497c-b72a-66579f6c338e STEP: Creating secret with name s-test-opt-upd-03fd62ce-5ac6-4f8f-805b-a0f6a36ce70e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6ac81f8c-4be4-497c-b72a-66579f6c338e STEP: Updating secret s-test-opt-upd-03fd62ce-5ac6-4f8f-805b-a0f6a36ce70e STEP: Creating secret with name s-test-opt-create-45f82e1c-702a-41e3-aaaf-508e7c84b5b3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:22:15.343: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7412" for this suite. •{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":430,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:22:15.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:22:15.890: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 13:22:17.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270535, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270535, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270535, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270535, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:22:20.921: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:22:20.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6054-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:22:22.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7212" for this suite. STEP: Destroying namespace "webhook-7212-markers" for this suite. 
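"Mutate custom resource with pruning" refers to the apiserver dropping any fields a mutating webhook injects that are not declared in the CRD's structural schema — with `apiextensions.k8s.io/v1`, pruning is always on for declared schemas. A hedged sketch of such a CRD (group, kind, and fields invented; the test's actual CRD is the generated `e2e-test-webhook-6054-crds` seen above):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
        # Fields not declared in this structural schema are pruned on
        # write, so an unknown field added by a mutating webhook is
        # silently dropped before the object is persisted.
```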
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.840 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":24,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:22:22.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-117d0c0f-48fc-4cce-86c6-f32b7fee400f STEP: Creating a pod to test consume configMaps Mar 8 13:22:22.266: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26" in namespace "projected-6393" to be "success or failure" Mar 8 13:22:22.269: INFO: Pod 
"pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149823ms Mar 8 13:22:24.273: INFO: Pod "pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006141504s STEP: Saw pod success Mar 8 13:22:24.273: INFO: Pod "pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26" satisfied condition "success or failure" Mar 8 13:22:24.275: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26 container projected-configmap-volume-test: STEP: delete the pod Mar 8 13:22:24.309: INFO: Waiting for pod pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26 to disappear Mar 8 13:22:24.317: INFO: Pod pod-projected-configmaps-aa149a68-007b-487e-b4bc-cbfbd89fcc26 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:22:24.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6393" for this suite. 
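The "mappings and Item mode" wording above refers to a projected configMap volume in which individual keys are remapped to file paths with per-item modes. A sketch of the relevant volume stanza — the ConfigMap name, key, path, and mode are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo          # hypothetical
spec:
  containers:
  - name: test
    image: busybox                   # illustrative
    command: ["sh", "-c", "ls -l /etc/projected && sleep 5"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config            # hypothetical ConfigMap
          items:
          - key: data-1
            path: mapped/data-1      # key remapped to a sub-path
            mode: 0400               # per-item file mode ([LinuxOnly])
```

The `[LinuxOnly]` tag on the test reflects that file-mode bits are only meaningful on Linux nodes.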
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":469,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:22:24.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 8 13:22:24.921: INFO: Pod name wrapped-volume-race-ce579070-9aef-40fa-aeac-18dfa98c42a0: Found 0 pods out of 5 Mar 8 13:22:29.929: INFO: Pod name wrapped-volume-race-ce579070-9aef-40fa-aeac-18dfa98c42a0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ce579070-9aef-40fa-aeac-18dfa98c42a0 in namespace emptydir-wrapper-3896, will wait for the garbage collector to delete the pods Mar 8 13:22:40.024: INFO: Deleting ReplicationController wrapped-volume-race-ce579070-9aef-40fa-aeac-18dfa98c42a0 took: 18.481517ms Mar 8 13:22:40.124: INFO: Terminating ReplicationController wrapped-volume-race-ce579070-9aef-40fa-aeac-18dfa98c42a0 pods took: 100.284826ms STEP: Creating RC which spawns configmap-volume pods Mar 8 13:22:45.852: INFO: Pod name wrapped-volume-race-fec5e7d5-825a-427d-8119-53fd1196329a: 
Found 0 pods out of 5 Mar 8 13:22:50.871: INFO: Pod name wrapped-volume-race-fec5e7d5-825a-427d-8119-53fd1196329a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fec5e7d5-825a-427d-8119-53fd1196329a in namespace emptydir-wrapper-3896, will wait for the garbage collector to delete the pods Mar 8 13:23:00.961: INFO: Deleting ReplicationController wrapped-volume-race-fec5e7d5-825a-427d-8119-53fd1196329a took: 7.912165ms Mar 8 13:23:01.261: INFO: Terminating ReplicationController wrapped-volume-race-fec5e7d5-825a-427d-8119-53fd1196329a pods took: 300.256654ms STEP: Creating RC which spawns configmap-volume pods Mar 8 13:23:08.787: INFO: Pod name wrapped-volume-race-0f83cda8-13a9-4de2-a0b8-142b8f959263: Found 0 pods out of 5 Mar 8 13:23:13.795: INFO: Pod name wrapped-volume-race-0f83cda8-13a9-4de2-a0b8-142b8f959263: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0f83cda8-13a9-4de2-a0b8-142b8f959263 in namespace emptydir-wrapper-3896, will wait for the garbage collector to delete the pods Mar 8 13:23:23.892: INFO: Deleting ReplicationController wrapped-volume-race-0f83cda8-13a9-4de2-a0b8-142b8f959263 took: 16.064728ms Mar 8 13:23:24.192: INFO: Terminating ReplicationController wrapped-volume-race-0f83cda8-13a9-4de2-a0b8-142b8f959263 pods took: 300.263971ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:31.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3896" for this suite. 
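The race exercised above involves pods that mount many configMap volumes at once (the test creates 50 ConfigMaps and an RC whose pods reference them), repeated across several create/delete cycles. A heavily trimmed sketch of one such pod — two volumes instead of fifty, all names invented:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-demo          # hypothetical
spec:
  containers:
  - name: test
    image: busybox                   # illustrative
    command: ["sleep", "3600"]
    volumeMounts:
    - {name: cm-0, mountPath: /etc/cm-0}
    - {name: cm-1, mountPath: /etc/cm-1}
  volumes:
  - name: cm-0
    configMap: {name: racey-configmap-0}   # hypothetical ConfigMap
  - name: cm-1
    configMap: {name: racey-configmap-1}
```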
• [SLOW TEST:67.185 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":26,"skipped":474,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:31.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:23:31.566: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 8 13:23:31.621: INFO: Number of nodes with available pods: 0 Mar 8 13:23:31.621: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 8 13:23:31.656: INFO: Number of nodes with available pods: 0 Mar 8 13:23:31.656: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:32.663: INFO: Number of nodes with available pods: 0 Mar 8 13:23:32.663: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:33.660: INFO: Number of nodes with available pods: 1 Mar 8 13:23:33.660: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 8 13:23:33.676: INFO: Number of nodes with available pods: 1 Mar 8 13:23:33.676: INFO: Number of running nodes: 0, number of available pods: 1 Mar 8 13:23:34.680: INFO: Number of nodes with available pods: 0 Mar 8 13:23:34.680: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 8 13:23:34.717: INFO: Number of nodes with available pods: 0 Mar 8 13:23:34.717: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:35.721: INFO: Number of nodes with available pods: 0 Mar 8 13:23:35.721: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:36.724: INFO: Number of nodes with available pods: 0 Mar 8 13:23:36.724: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:37.747: INFO: Number of nodes with available pods: 0 Mar 8 13:23:37.747: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:38.721: INFO: Number of nodes with available pods: 0 Mar 8 13:23:38.721: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:23:39.721: INFO: Number of nodes with available pods: 1 Mar 8 13:23:39.721: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in 
namespace daemonsets-8817, will wait for the garbage collector to delete the pods Mar 8 13:23:39.786: INFO: Deleting DaemonSet.extensions daemon-set took: 6.168665ms Mar 8 13:23:39.886: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.230705ms Mar 8 13:23:42.990: INFO: Number of nodes with available pods: 0 Mar 8 13:23:42.990: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 13:23:42.996: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8817/daemonsets","resourceVersion":"9060"},"items":null} Mar 8 13:23:43.029: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8817/pods","resourceVersion":"9060"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:43.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8817" for this suite. 
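The "complex daemon" scenario above drives scheduling purely through labels: the DaemonSet carries a node selector, and its pods are launched or evicted as node labels flip between blue and green; mid-test the selector and update strategy are changed. A hedged sketch of a DaemonSet in that shape — the label key, pod labels, and image are invented, not the test's generated spec:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-demo}  # hypothetical pod label
  updateStrategy:
    type: RollingUpdate              # the strategy the test switches to
  template:
    metadata:
      labels: {app: daemon-demo}
    spec:
      nodeSelector:
        color: green                 # only nodes labelled green run the pod
      containers:
      - name: app
        image: busybox               # illustrative
        command: ["sleep", "3600"]
```

Relabelling a node from green back to blue makes it stop matching the selector, and the DaemonSet controller removes the pod — the "wait for daemons to be unscheduled" step above.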
• [SLOW TEST:11.556 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":27,"skipped":488,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:43.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:23:45.182: INFO: Waiting up to 5m0s for pod "client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218" in namespace "pods-348" to be "success or failure" Mar 8 13:23:45.192: INFO: Pod "client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218": Phase="Pending", Reason="", readiness=false. Elapsed: 9.981819ms Mar 8 13:23:47.195: INFO: Pod "client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013480583s STEP: Saw pod success Mar 8 13:23:47.195: INFO: Pod "client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218" satisfied condition "success or failure" Mar 8 13:23:47.198: INFO: Trying to get logs from node kind-worker pod client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218 container env3cont: STEP: delete the pod Mar 8 13:23:47.228: INFO: Waiting for pod client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218 to disappear Mar 8 13:23:47.232: INFO: Pod client-envvars-20703b66-cdc9-432a-a3d3-4110f61e5218 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:47.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-348" for this suite. •{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":500,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:47.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:23:47.312: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a" in namespace "downward-api-8840" to be "success or failure" Mar 8 13:23:47.327: INFO: Pod "downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.551788ms Mar 8 13:23:49.331: INFO: Pod "downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019762128s STEP: Saw pod success Mar 8 13:23:49.332: INFO: Pod "downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a" satisfied condition "success or failure" Mar 8 13:23:49.335: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a container client-container: STEP: delete the pod Mar 8 13:23:49.369: INFO: Waiting for pod downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a to disappear Mar 8 13:23:49.387: INFO: Pod downwardapi-volume-bb4e77d4-c313-4397-9bae-9b53443a1a0a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:49.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8840" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":501,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:49.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-74fa3ea8-cbac-486e-a0d1-659b4599600b STEP: Creating a pod to test consume configMaps Mar 8 13:23:49.455: INFO: Waiting up to 5m0s for pod "pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582" in namespace "configmap-585" to be "success or failure" Mar 8 13:23:49.459: INFO: Pod "pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582": Phase="Pending", Reason="", readiness=false. Elapsed: 3.978476ms Mar 8 13:23:51.463: INFO: Pod "pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007914226s STEP: Saw pod success Mar 8 13:23:51.463: INFO: Pod "pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582" satisfied condition "success or failure" Mar 8 13:23:51.467: INFO: Trying to get logs from node kind-worker pod pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582 container configmap-volume-test: STEP: delete the pod Mar 8 13:23:51.522: INFO: Waiting for pod pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582 to disappear Mar 8 13:23:51.525: INFO: Pod pod-configmaps-e08666b7-362e-4f0b-bf1b-d9b74a063582 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:51.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-585" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":502,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:51.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 
rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0308 13:23:52.615369 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 13:23:52.615: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:52.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1067" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":31,"skipped":504,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:52.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:23:52.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e" in namespace "downward-api-3267" to be "success or failure" Mar 8 13:23:52.722: INFO: Pod "downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.178823ms Mar 8 13:23:54.725: INFO: Pod "downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.029017566s STEP: Saw pod success Mar 8 13:23:54.725: INFO: Pod "downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e" satisfied condition "success or failure" Mar 8 13:23:54.728: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e container client-container: STEP: delete the pod Mar 8 13:23:54.764: INFO: Waiting for pod downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e to disappear Mar 8 13:23:54.772: INFO: Pod downwardapi-volume-fdf2caae-ab02-41f8-8283-b24b50a1ab3e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:54.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3267" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":515,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:54.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 8 13:23:54.839: INFO: Created pod &Pod{ObjectMeta:{dns-4712 dns-4712 /api/v1/namespaces/dns-4712/pods/dns-4712 ac2c9ac6-cbb1-48b9-b9ba-8022bb3e2ac5 9249 0 2020-03-08 13:23:54 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vz8zf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vz8zf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vz8zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
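The serialized PodSpec above is hard to read in Go-struct form. As a sketch, the fields this test actually exercises correspond roughly to the manifest below, reconstructed from the dump above (not taken from the test's source; only the DNS-relevant fields are shown):

```yaml
# Minimal reconstruction of the dns-4712 test pod from the log dump above.
# Field values (image, args, nameserver, search domain) come from the dump;
# everything omitted falls back to API defaults.
apiVersion: v1
kind: Pod
metadata:
  name: dns-4712
  namespace: dns-4712
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
  dnsPolicy: "None"              # suppress cluster DNS injection entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]     # custom resolver the test then verifies in the pod
    searches: ["resolv.conf.local"]
```

With `dnsPolicy: None`, the pod's `/etc/resolv.conf` is built solely from `dnsConfig`, which is what the subsequent `dns-suffix` and `dns-server-list` exec probes check.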
Mar 8 13:23:58.848: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4712 PodName:dns-4712 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:23:58.848: INFO: >>> kubeConfig: /root/.kube/config I0308 13:23:58.887564 6 log.go:172] (0xc002c78790) (0xc0025aa640) Create stream I0308 13:23:58.887591 6 log.go:172] (0xc002c78790) (0xc0025aa640) Stream added, broadcasting: 1 I0308 13:23:58.890782 6 log.go:172] (0xc002c78790) Reply frame received for 1 I0308 13:23:58.890830 6 log.go:172] (0xc002c78790) (0xc001ad0000) Create stream I0308 13:23:58.890846 6 log.go:172] (0xc002c78790) (0xc001ad0000) Stream added, broadcasting: 3 I0308 13:23:58.893645 6 log.go:172] (0xc002c78790) Reply frame received for 3 I0308 13:23:58.893690 6 log.go:172] (0xc002c78790) (0xc001912780) Create stream I0308 13:23:58.893708 6 log.go:172] (0xc002c78790) (0xc001912780) Stream added, broadcasting: 5 I0308 13:23:58.894712 6 log.go:172] (0xc002c78790) Reply frame received for 5 I0308 13:23:58.968537 6 log.go:172] (0xc002c78790) Data frame received for 3 I0308 13:23:58.968558 6 log.go:172] (0xc001ad0000) (3) Data frame handling I0308 13:23:58.968571 6 log.go:172] (0xc001ad0000) (3) Data frame sent I0308 13:23:58.969546 6 log.go:172] (0xc002c78790) Data frame received for 5 I0308 13:23:58.969588 6 log.go:172] (0xc001912780) (5) Data frame handling I0308 13:23:58.969617 6 log.go:172] (0xc002c78790) Data frame received for 3 I0308 13:23:58.969631 6 log.go:172] (0xc001ad0000) (3) Data frame handling I0308 13:23:58.971088 6 log.go:172] (0xc002c78790) Data frame received for 1 I0308 13:23:58.971108 6 log.go:172] (0xc0025aa640) (1) Data frame handling I0308 13:23:58.971117 6 log.go:172] (0xc0025aa640) (1) Data frame sent I0308 13:23:58.971130 6 log.go:172] (0xc002c78790) (0xc0025aa640) Stream removed, broadcasting: 1 I0308 13:23:58.971208 6 log.go:172] (0xc002c78790) Go away received I0308 13:23:58.971379 6 log.go:172] (0xc002c78790) 
(0xc0025aa640) Stream removed, broadcasting: 1 I0308 13:23:58.971395 6 log.go:172] (0xc002c78790) (0xc001ad0000) Stream removed, broadcasting: 3 I0308 13:23:58.971405 6 log.go:172] (0xc002c78790) (0xc001912780) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Mar 8 13:23:58.971: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4712 PodName:dns-4712 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:23:58.971: INFO: >>> kubeConfig: /root/.kube/config I0308 13:23:59.002935 6 log.go:172] (0xc002c78e70) (0xc0025aa960) Create stream I0308 13:23:59.002961 6 log.go:172] (0xc002c78e70) (0xc0025aa960) Stream added, broadcasting: 1 I0308 13:23:59.004843 6 log.go:172] (0xc002c78e70) Reply frame received for 1 I0308 13:23:59.004874 6 log.go:172] (0xc002c78e70) (0xc001ad00a0) Create stream I0308 13:23:59.004887 6 log.go:172] (0xc002c78e70) (0xc001ad00a0) Stream added, broadcasting: 3 I0308 13:23:59.005680 6 log.go:172] (0xc002c78e70) Reply frame received for 3 I0308 13:23:59.005706 6 log.go:172] (0xc002c78e70) (0xc001210000) Create stream I0308 13:23:59.005715 6 log.go:172] (0xc002c78e70) (0xc001210000) Stream added, broadcasting: 5 I0308 13:23:59.006550 6 log.go:172] (0xc002c78e70) Reply frame received for 5 I0308 13:23:59.073382 6 log.go:172] (0xc002c78e70) Data frame received for 3 I0308 13:23:59.073405 6 log.go:172] (0xc001ad00a0) (3) Data frame handling I0308 13:23:59.073423 6 log.go:172] (0xc001ad00a0) (3) Data frame sent I0308 13:23:59.074174 6 log.go:172] (0xc002c78e70) Data frame received for 3 I0308 13:23:59.074224 6 log.go:172] (0xc001ad00a0) (3) Data frame handling I0308 13:23:59.074251 6 log.go:172] (0xc002c78e70) Data frame received for 5 I0308 13:23:59.074264 6 log.go:172] (0xc001210000) (5) Data frame handling I0308 13:23:59.075666 6 log.go:172] (0xc002c78e70) Data frame received for 1 I0308 13:23:59.075683 6 log.go:172] (0xc0025aa960) (1) Data 
frame handling I0308 13:23:59.075697 6 log.go:172] (0xc0025aa960) (1) Data frame sent I0308 13:23:59.075708 6 log.go:172] (0xc002c78e70) (0xc0025aa960) Stream removed, broadcasting: 1 I0308 13:23:59.075720 6 log.go:172] (0xc002c78e70) Go away received I0308 13:23:59.075882 6 log.go:172] (0xc002c78e70) (0xc0025aa960) Stream removed, broadcasting: 1 I0308 13:23:59.075905 6 log.go:172] (0xc002c78e70) (0xc001ad00a0) Stream removed, broadcasting: 3 I0308 13:23:59.075927 6 log.go:172] (0xc002c78e70) (0xc001210000) Stream removed, broadcasting: 5 Mar 8 13:23:59.075: INFO: Deleting pod dns-4712... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:23:59.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4712" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":33,"skipped":531,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:23:59.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3411.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local; sleep 1; done 
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3411.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 13:24:11.230: INFO: DNS probes using dns-test-3d4b2b19-29f4-4a3a-af6a-82284cbae084 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3411.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3411.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 13:24:15.309: INFO: File wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:15.312: INFO: File jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:15.312: INFO: Lookups using dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 failed for: [wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local] Mar 8 13:24:20.317: INFO: File wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 8 13:24:20.320: INFO: File jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:20.320: INFO: Lookups using dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 failed for: [wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local] Mar 8 13:24:25.318: INFO: File wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:25.321: INFO: File jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:25.322: INFO: Lookups using dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 failed for: [wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local] Mar 8 13:24:30.317: INFO: File wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:30.321: INFO: File jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 8 13:24:30.321: INFO: Lookups using dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 failed for: [wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local] Mar 8 13:24:35.321: INFO: File jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local from pod dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 8 13:24:35.321: INFO: Lookups using dns-3411/dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 failed for: [jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local] Mar 8 13:24:40.328: INFO: DNS probes using dns-test-2c58be57-6540-42cc-8bdc-cabfbf247133 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3411.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3411.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3411.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3411.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 13:24:44.506: INFO: DNS probes using dns-test-db0d5ae7-268e-4df0-9866-383a67fc1b3b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:24:44.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3411" for this suite. 
• [SLOW TEST:45.543 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":34,"skipped":545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:24:44.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:24:44.765: INFO: (0) /api/v1/nodes/kind-worker2/proxy/logs/:
containers/ pods/ (200; 6.147305ms)
Mar 8 13:24:44.769: INFO: (1) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.390563ms)
Mar 8 13:24:44.772: INFO: (2) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.112112ms)
Mar 8 13:24:44.775: INFO: (3) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.738332ms)
Mar 8 13:24:44.777: INFO: (4) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.600111ms)
Mar 8 13:24:44.780: INFO: (5) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.79536ms)
Mar 8 13:24:44.783: INFO: (6) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.91019ms)
Mar 8 13:24:44.786: INFO: (7) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.086593ms)
Mar 8 13:24:44.789: INFO: (8) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.04343ms)
Mar 8 13:24:44.792: INFO: (9) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.006065ms)
Mar 8 13:24:44.795: INFO: (10) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.77461ms)
Mar 8 13:24:44.798: INFO: (11) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.721349ms)
Mar 8 13:24:44.800: INFO: (12) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.673161ms)
Mar 8 13:24:44.803: INFO: (13) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.954738ms)
Mar 8 13:24:44.806: INFO: (14) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.741169ms)
Mar 8 13:24:44.809: INFO: (15) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.516905ms)
Mar 8 13:24:44.811: INFO: (16) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 2.321577ms)
Mar 8 13:24:44.814: INFO: (17) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.205546ms)
Mar 8 13:24:44.817: INFO: (18) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/ (200; 3.118697ms)
Mar 8 13:24:44.820: INFO: (19) /api/v1/nodes/kind-worker2/proxy/logs/: containers/ pods/
(200; 2.481929ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:24:44.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7465" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":35,"skipped":573,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:24:44.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:24:44.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae" in namespace "projected-5933" to be "success or failure" Mar 8 13:24:44.893: INFO: Pod "downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 1.821548ms Mar 8 13:24:46.897: INFO: Pod "downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00589654s STEP: Saw pod success Mar 8 13:24:46.897: INFO: Pod "downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae" satisfied condition "success or failure" Mar 8 13:24:46.899: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae container client-container: STEP: delete the pod Mar 8 13:24:46.929: INFO: Waiting for pod downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae to disappear Mar 8 13:24:46.935: INFO: Pod downwardapi-volume-0d53c2d2-8dd1-4dfe-a250-f42a8e16d3ae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:24:46.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5933" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":584,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:24:46.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:24:47.032: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 8 13:24:47.044: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:47.060: INFO: Number of nodes with available pods: 0 Mar 8 13:24:47.060: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:24:48.065: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:48.068: INFO: Number of nodes with available pods: 0 Mar 8 13:24:48.068: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:24:49.063: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:49.066: INFO: Number of nodes with available pods: 1 Mar 8 13:24:49.066: INFO: Node kind-worker2 is running more than one daemon pod Mar 8 13:24:50.067: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:50.078: INFO: Number of nodes with available pods: 2 Mar 8 13:24:50.078: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 8 13:24:50.150: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:50.150: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 8 13:24:50.168: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:52.068: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:52.068: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:52.073: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:52.203: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:52.203: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:52.207: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:53.174: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:53.174: INFO: Pod daemon-set-hmmf2 is not available Mar 8 13:24:53.174: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:53.216: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:54.172: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 8 13:24:54.172: INFO: Pod daemon-set-hmmf2 is not available Mar 8 13:24:54.172: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:54.175: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:55.173: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:55.173: INFO: Pod daemon-set-hmmf2 is not available Mar 8 13:24:55.173: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:55.177: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:56.185: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:56.185: INFO: Pod daemon-set-hmmf2 is not available Mar 8 13:24:56.185: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:56.188: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:57.173: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:57.173: INFO: Pod daemon-set-hmmf2 is not available Mar 8 13:24:57.173: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 8 13:24:57.176: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:58.173: INFO: Wrong image for pod: daemon-set-hmmf2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:58.173: INFO: Pod daemon-set-hmmf2 is not available Mar 8 13:24:58.173: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:58.196: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:24:59.173: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:24:59.173: INFO: Pod daemon-set-s5m9n is not available Mar 8 13:24:59.177: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:00.175: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:00.175: INFO: Pod daemon-set-s5m9n is not available Mar 8 13:25:00.178: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:01.173: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 8 13:25:01.177: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:02.171: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:02.171: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:02.174: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:03.171: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:03.171: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:03.175: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:04.171: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:04.171: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:04.174: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:05.172: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:05.172: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:05.176: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:06.172: INFO: Wrong image for pod: daemon-set-nv52v. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:06.172: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:06.176: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:07.172: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:07.172: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:07.176: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:08.176: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:08.176: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:08.179: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:09.171: INFO: Wrong image for pod: daemon-set-nv52v. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 8 13:25:09.171: INFO: Pod daemon-set-nv52v is not available Mar 8 13:25:09.173: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:10.171: INFO: Pod daemon-set-5gpx7 is not available Mar 8 13:25:10.175: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 8 13:25:10.178: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:10.180: INFO: Number of nodes with available pods: 1 Mar 8 13:25:10.180: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:25:11.184: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:25:11.187: INFO: Number of nodes with available pods: 2 Mar 8 13:25:11.187: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1141, will wait for the garbage collector to delete the pods Mar 8 13:25:11.258: INFO: Deleting DaemonSet.extensions daemon-set took: 5.780033ms Mar 8 13:25:11.559: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.190037ms Mar 8 13:25:19.461: INFO: Number of nodes with available pods: 0 Mar 8 13:25:19.461: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 13:25:19.463: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1141/daemonsets","resourceVersion":"9780"},"items":null} Mar 8 13:25:19.465: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1141/pods","resourceVersion":"9780"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:25:19.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1141" for this suite. 
• [SLOW TEST:32.535 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":37,"skipped":630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:25:19.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1759 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1759 STEP: Waiting until all stateful set ss 
replicas will be running in namespace statefulset-1759 Mar 8 13:25:19.528: INFO: Found 0 stateful pods, waiting for 1 Mar 8 13:25:29.532: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 8 13:25:29.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 13:25:29.802: INFO: stderr: "I0308 13:25:29.717612 282 log.go:172] (0xc0000f6370) (0xc00040d540) Create stream\nI0308 13:25:29.717666 282 log.go:172] (0xc0000f6370) (0xc00040d540) Stream added, broadcasting: 1\nI0308 13:25:29.720499 282 log.go:172] (0xc0000f6370) Reply frame received for 1\nI0308 13:25:29.720537 282 log.go:172] (0xc0000f6370) (0xc000a2c000) Create stream\nI0308 13:25:29.720548 282 log.go:172] (0xc0000f6370) (0xc000a2c000) Stream added, broadcasting: 3\nI0308 13:25:29.721807 282 log.go:172] (0xc0000f6370) Reply frame received for 3\nI0308 13:25:29.721845 282 log.go:172] (0xc0000f6370) (0xc0003be000) Create stream\nI0308 13:25:29.721854 282 log.go:172] (0xc0000f6370) (0xc0003be000) Stream added, broadcasting: 5\nI0308 13:25:29.722858 282 log.go:172] (0xc0000f6370) Reply frame received for 5\nI0308 13:25:29.777715 282 log.go:172] (0xc0000f6370) Data frame received for 5\nI0308 13:25:29.777740 282 log.go:172] (0xc0003be000) (5) Data frame handling\nI0308 13:25:29.777756 282 log.go:172] (0xc0003be000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 13:25:29.797194 282 log.go:172] (0xc0000f6370) Data frame received for 3\nI0308 13:25:29.797214 282 log.go:172] (0xc000a2c000) (3) Data frame handling\nI0308 13:25:29.797227 282 log.go:172] (0xc000a2c000) (3) Data frame sent\nI0308 13:25:29.797762 282 log.go:172] (0xc0000f6370) Data frame received for 3\nI0308 13:25:29.797802 282 log.go:172] (0xc000a2c000) (3) Data 
frame handling\nI0308 13:25:29.797826 282 log.go:172] (0xc0000f6370) Data frame received for 5\nI0308 13:25:29.797838 282 log.go:172] (0xc0003be000) (5) Data frame handling\nI0308 13:25:29.799501 282 log.go:172] (0xc0000f6370) Data frame received for 1\nI0308 13:25:29.799515 282 log.go:172] (0xc00040d540) (1) Data frame handling\nI0308 13:25:29.799522 282 log.go:172] (0xc00040d540) (1) Data frame sent\nI0308 13:25:29.799529 282 log.go:172] (0xc0000f6370) (0xc00040d540) Stream removed, broadcasting: 1\nI0308 13:25:29.799573 282 log.go:172] (0xc0000f6370) Go away received\nI0308 13:25:29.799757 282 log.go:172] (0xc0000f6370) (0xc00040d540) Stream removed, broadcasting: 1\nI0308 13:25:29.799774 282 log.go:172] (0xc0000f6370) (0xc000a2c000) Stream removed, broadcasting: 3\nI0308 13:25:29.799783 282 log.go:172] (0xc0000f6370) (0xc0003be000) Stream removed, broadcasting: 5\n" Mar 8 13:25:29.802: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 13:25:29.802: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 13:25:29.806: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 13:25:39.810: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 13:25:39.810: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 13:25:39.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999333s Mar 8 13:25:40.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995277624s Mar 8 13:25:41.831: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991100738s Mar 8 13:25:42.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986804346s Mar 8 13:25:43.839: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982544902s Mar 8 13:25:44.843: INFO: Verifying statefulset ss doesn't scale past 1 
for another 4.978826249s Mar 8 13:25:45.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974672111s Mar 8 13:25:46.851: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.97045622s Mar 8 13:25:47.855: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.966558068s Mar 8 13:25:48.859: INFO: Verifying statefulset ss doesn't scale past 1 for another 962.828434ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1759 Mar 8 13:25:49.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 13:25:50.098: INFO: stderr: "I0308 13:25:50.022673 303 log.go:172] (0xc0009126e0) (0xc0006fdd60) Create stream\nI0308 13:25:50.022729 303 log.go:172] (0xc0009126e0) (0xc0006fdd60) Stream added, broadcasting: 1\nI0308 13:25:50.025454 303 log.go:172] (0xc0009126e0) Reply frame received for 1\nI0308 13:25:50.025509 303 log.go:172] (0xc0009126e0) (0xc0003f54a0) Create stream\nI0308 13:25:50.025523 303 log.go:172] (0xc0009126e0) (0xc0003f54a0) Stream added, broadcasting: 3\nI0308 13:25:50.026645 303 log.go:172] (0xc0009126e0) Reply frame received for 3\nI0308 13:25:50.026673 303 log.go:172] (0xc0009126e0) (0xc0003f5540) Create stream\nI0308 13:25:50.026683 303 log.go:172] (0xc0009126e0) (0xc0003f5540) Stream added, broadcasting: 5\nI0308 13:25:50.027538 303 log.go:172] (0xc0009126e0) Reply frame received for 5\nI0308 13:25:50.093357 303 log.go:172] (0xc0009126e0) Data frame received for 3\nI0308 13:25:50.093391 303 log.go:172] (0xc0003f54a0) (3) Data frame handling\nI0308 13:25:50.093409 303 log.go:172] (0xc0003f54a0) (3) Data frame sent\nI0308 13:25:50.093480 303 log.go:172] (0xc0009126e0) Data frame received for 3\nI0308 13:25:50.093504 303 log.go:172] (0xc0003f54a0) (3) Data frame handling\nI0308 13:25:50.093529 303 log.go:172] 
(0xc0009126e0) Data frame received for 5\nI0308 13:25:50.093546 303 log.go:172] (0xc0003f5540) (5) Data frame handling\nI0308 13:25:50.093561 303 log.go:172] (0xc0003f5540) (5) Data frame sent\nI0308 13:25:50.093583 303 log.go:172] (0xc0009126e0) Data frame received for 5\nI0308 13:25:50.093598 303 log.go:172] (0xc0003f5540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 13:25:50.095288 303 log.go:172] (0xc0009126e0) Data frame received for 1\nI0308 13:25:50.095308 303 log.go:172] (0xc0006fdd60) (1) Data frame handling\nI0308 13:25:50.095318 303 log.go:172] (0xc0006fdd60) (1) Data frame sent\nI0308 13:25:50.095329 303 log.go:172] (0xc0009126e0) (0xc0006fdd60) Stream removed, broadcasting: 1\nI0308 13:25:50.095344 303 log.go:172] (0xc0009126e0) Go away received\nI0308 13:25:50.095673 303 log.go:172] (0xc0009126e0) (0xc0006fdd60) Stream removed, broadcasting: 1\nI0308 13:25:50.095695 303 log.go:172] (0xc0009126e0) (0xc0003f54a0) Stream removed, broadcasting: 3\nI0308 13:25:50.095705 303 log.go:172] (0xc0009126e0) (0xc0003f5540) Stream removed, broadcasting: 5\n" Mar 8 13:25:50.098: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 13:25:50.098: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 13:25:50.104: INFO: Found 1 stateful pods, waiting for 3 Mar 8 13:26:00.109: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 13:26:00.109: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 13:26:00.109: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 8 13:26:00.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1759 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 13:26:00.350: INFO: stderr: "I0308 13:26:00.286464 324 log.go:172] (0xc000534840) (0xc0008b0000) Create stream\nI0308 13:26:00.286513 324 log.go:172] (0xc000534840) (0xc0008b0000) Stream added, broadcasting: 1\nI0308 13:26:00.288675 324 log.go:172] (0xc000534840) Reply frame received for 1\nI0308 13:26:00.288718 324 log.go:172] (0xc000534840) (0xc00066bc20) Create stream\nI0308 13:26:00.288728 324 log.go:172] (0xc000534840) (0xc00066bc20) Stream added, broadcasting: 3\nI0308 13:26:00.289745 324 log.go:172] (0xc000534840) Reply frame received for 3\nI0308 13:26:00.289777 324 log.go:172] (0xc000534840) (0xc0008b00a0) Create stream\nI0308 13:26:00.289790 324 log.go:172] (0xc000534840) (0xc0008b00a0) Stream added, broadcasting: 5\nI0308 13:26:00.290670 324 log.go:172] (0xc000534840) Reply frame received for 5\nI0308 13:26:00.345970 324 log.go:172] (0xc000534840) Data frame received for 3\nI0308 13:26:00.345992 324 log.go:172] (0xc00066bc20) (3) Data frame handling\nI0308 13:26:00.346002 324 log.go:172] (0xc00066bc20) (3) Data frame sent\nI0308 13:26:00.346010 324 log.go:172] (0xc000534840) Data frame received for 3\nI0308 13:26:00.346018 324 log.go:172] (0xc00066bc20) (3) Data frame handling\nI0308 13:26:00.346042 324 log.go:172] (0xc000534840) Data frame received for 5\nI0308 13:26:00.346052 324 log.go:172] (0xc0008b00a0) (5) Data frame handling\nI0308 13:26:00.346064 324 log.go:172] (0xc0008b00a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 13:26:00.346386 324 log.go:172] (0xc000534840) Data frame received for 5\nI0308 13:26:00.346402 324 log.go:172] (0xc0008b00a0) (5) Data frame handling\nI0308 13:26:00.347628 324 log.go:172] (0xc000534840) Data frame received for 1\nI0308 13:26:00.347642 324 log.go:172] (0xc0008b0000) (1) Data frame handling\nI0308 13:26:00.347648 324 log.go:172] (0xc0008b0000) (1) Data frame 
sent\nI0308 13:26:00.347656 324 log.go:172] (0xc000534840) (0xc0008b0000) Stream removed, broadcasting: 1\nI0308 13:26:00.347686 324 log.go:172] (0xc000534840) Go away received\nI0308 13:26:00.347873 324 log.go:172] (0xc000534840) (0xc0008b0000) Stream removed, broadcasting: 1\nI0308 13:26:00.347883 324 log.go:172] (0xc000534840) (0xc00066bc20) Stream removed, broadcasting: 3\nI0308 13:26:00.347888 324 log.go:172] (0xc000534840) (0xc0008b00a0) Stream removed, broadcasting: 5\n" Mar 8 13:26:00.350: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 13:26:00.350: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 13:26:00.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 13:26:00.555: INFO: stderr: "I0308 13:26:00.484178 344 log.go:172] (0xc00095c0b0) (0xc0006b9c20) Create stream\nI0308 13:26:00.484229 344 log.go:172] (0xc00095c0b0) (0xc0006b9c20) Stream added, broadcasting: 1\nI0308 13:26:00.486493 344 log.go:172] (0xc00095c0b0) Reply frame received for 1\nI0308 13:26:00.486531 344 log.go:172] (0xc00095c0b0) (0xc0008ea000) Create stream\nI0308 13:26:00.486540 344 log.go:172] (0xc00095c0b0) (0xc0008ea000) Stream added, broadcasting: 3\nI0308 13:26:00.487558 344 log.go:172] (0xc00095c0b0) Reply frame received for 3\nI0308 13:26:00.487587 344 log.go:172] (0xc00095c0b0) (0xc0006b9cc0) Create stream\nI0308 13:26:00.487595 344 log.go:172] (0xc00095c0b0) (0xc0006b9cc0) Stream added, broadcasting: 5\nI0308 13:26:00.488395 344 log.go:172] (0xc00095c0b0) Reply frame received for 5\nI0308 13:26:00.533001 344 log.go:172] (0xc00095c0b0) Data frame received for 5\nI0308 13:26:00.533024 344 log.go:172] (0xc0006b9cc0) (5) Data frame handling\nI0308 13:26:00.533043 344 log.go:172] (0xc0006b9cc0) (5) Data frame 
sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 13:26:00.550659 344 log.go:172] (0xc00095c0b0) Data frame received for 5\nI0308 13:26:00.550695 344 log.go:172] (0xc0006b9cc0) (5) Data frame handling\nI0308 13:26:00.550722 344 log.go:172] (0xc00095c0b0) Data frame received for 3\nI0308 13:26:00.550742 344 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0308 13:26:00.550767 344 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0308 13:26:00.550777 344 log.go:172] (0xc00095c0b0) Data frame received for 3\nI0308 13:26:00.550785 344 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0308 13:26:00.552224 344 log.go:172] (0xc00095c0b0) Data frame received for 1\nI0308 13:26:00.552238 344 log.go:172] (0xc0006b9c20) (1) Data frame handling\nI0308 13:26:00.552246 344 log.go:172] (0xc0006b9c20) (1) Data frame sent\nI0308 13:26:00.552340 344 log.go:172] (0xc00095c0b0) (0xc0006b9c20) Stream removed, broadcasting: 1\nI0308 13:26:00.552404 344 log.go:172] (0xc00095c0b0) Go away received\nI0308 13:26:00.552630 344 log.go:172] (0xc00095c0b0) (0xc0006b9c20) Stream removed, broadcasting: 1\nI0308 13:26:00.552646 344 log.go:172] (0xc00095c0b0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0308 13:26:00.552653 344 log.go:172] (0xc00095c0b0) (0xc0006b9cc0) Stream removed, broadcasting: 5\n" Mar 8 13:26:00.555: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 13:26:00.555: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 13:26:00.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 13:26:00.819: INFO: stderr: "I0308 13:26:00.724782 366 log.go:172] (0xc00070c840) (0xc000968140) Create stream\nI0308 13:26:00.724831 366 log.go:172] (0xc00070c840) (0xc000968140) Stream added, broadcasting: 
1\nI0308 13:26:00.727308 366 log.go:172] (0xc00070c840) Reply frame received for 1\nI0308 13:26:00.727366 366 log.go:172] (0xc00070c840) (0xc0006c3a40) Create stream\nI0308 13:26:00.727383 366 log.go:172] (0xc00070c840) (0xc0006c3a40) Stream added, broadcasting: 3\nI0308 13:26:00.728182 366 log.go:172] (0xc00070c840) Reply frame received for 3\nI0308 13:26:00.728206 366 log.go:172] (0xc00070c840) (0xc0009681e0) Create stream\nI0308 13:26:00.728214 366 log.go:172] (0xc00070c840) (0xc0009681e0) Stream added, broadcasting: 5\nI0308 13:26:00.729094 366 log.go:172] (0xc00070c840) Reply frame received for 5\nI0308 13:26:00.792439 366 log.go:172] (0xc00070c840) Data frame received for 5\nI0308 13:26:00.792462 366 log.go:172] (0xc0009681e0) (5) Data frame handling\nI0308 13:26:00.792482 366 log.go:172] (0xc0009681e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 13:26:00.814205 366 log.go:172] (0xc00070c840) Data frame received for 3\nI0308 13:26:00.814226 366 log.go:172] (0xc0006c3a40) (3) Data frame handling\nI0308 13:26:00.814245 366 log.go:172] (0xc0006c3a40) (3) Data frame sent\nI0308 13:26:00.814520 366 log.go:172] (0xc00070c840) Data frame received for 5\nI0308 13:26:00.814575 366 log.go:172] (0xc0009681e0) (5) Data frame handling\nI0308 13:26:00.814607 366 log.go:172] (0xc00070c840) Data frame received for 3\nI0308 13:26:00.814630 366 log.go:172] (0xc0006c3a40) (3) Data frame handling\nI0308 13:26:00.816350 366 log.go:172] (0xc00070c840) Data frame received for 1\nI0308 13:26:00.816374 366 log.go:172] (0xc000968140) (1) Data frame handling\nI0308 13:26:00.816392 366 log.go:172] (0xc000968140) (1) Data frame sent\nI0308 13:26:00.816410 366 log.go:172] (0xc00070c840) (0xc000968140) Stream removed, broadcasting: 1\nI0308 13:26:00.816444 366 log.go:172] (0xc00070c840) Go away received\nI0308 13:26:00.816810 366 log.go:172] (0xc00070c840) (0xc000968140) Stream removed, broadcasting: 1\nI0308 13:26:00.816830 366 log.go:172] (0xc00070c840) 
(0xc0006c3a40) Stream removed, broadcasting: 3\nI0308 13:26:00.816839 366 log.go:172] (0xc00070c840) (0xc0009681e0) Stream removed, broadcasting: 5\n" Mar 8 13:26:00.819: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 13:26:00.819: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 13:26:00.819: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 13:26:00.823: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 8 13:26:10.846: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 13:26:10.846: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 13:26:10.846: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 13:26:10.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999439s Mar 8 13:26:11.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992763788s Mar 8 13:26:12.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982677704s Mar 8 13:26:13.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978336725s Mar 8 13:26:14.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973996137s Mar 8 13:26:15.886: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969746621s Mar 8 13:26:16.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965458907s Mar 8 13:26:17.896: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960351622s Mar 8 13:26:18.905: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955822543s Mar 8 13:26:19.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.493067ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespacestatefulset-1759 Mar 8 13:26:20.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 13:26:21.154: INFO: stderr: "I0308 13:26:21.087048 386 log.go:172] (0xc000b64000) (0xc000aee000) Create stream\nI0308 13:26:21.087098 386 log.go:172] (0xc000b64000) (0xc000aee000) Stream added, broadcasting: 1\nI0308 13:26:21.089366 386 log.go:172] (0xc000b64000) Reply frame received for 1\nI0308 13:26:21.089398 386 log.go:172] (0xc000b64000) (0xc0000c14a0) Create stream\nI0308 13:26:21.089409 386 log.go:172] (0xc000b64000) (0xc0000c14a0) Stream added, broadcasting: 3\nI0308 13:26:21.090401 386 log.go:172] (0xc000b64000) Reply frame received for 3\nI0308 13:26:21.090440 386 log.go:172] (0xc000b64000) (0xc00061ba40) Create stream\nI0308 13:26:21.090450 386 log.go:172] (0xc000b64000) (0xc00061ba40) Stream added, broadcasting: 5\nI0308 13:26:21.091501 386 log.go:172] (0xc000b64000) Reply frame received for 5\nI0308 13:26:21.149814 386 log.go:172] (0xc000b64000) Data frame received for 3\nI0308 13:26:21.149926 386 log.go:172] (0xc0000c14a0) (3) Data frame handling\nI0308 13:26:21.149946 386 log.go:172] (0xc0000c14a0) (3) Data frame sent\nI0308 13:26:21.149957 386 log.go:172] (0xc000b64000) Data frame received for 3\nI0308 13:26:21.149966 386 log.go:172] (0xc0000c14a0) (3) Data frame handling\nI0308 13:26:21.150003 386 log.go:172] (0xc000b64000) Data frame received for 5\nI0308 13:26:21.150027 386 log.go:172] (0xc00061ba40) (5) Data frame handling\nI0308 13:26:21.150038 386 log.go:172] (0xc00061ba40) (5) Data frame sent\nI0308 13:26:21.150054 386 log.go:172] (0xc000b64000) Data frame received for 5\nI0308 13:26:21.150062 386 log.go:172] (0xc00061ba40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 13:26:21.151367 386 log.go:172] (0xc000b64000) Data frame received for 1\nI0308 13:26:21.151385 386 
log.go:172] (0xc000aee000) (1) Data frame handling\nI0308 13:26:21.151395 386 log.go:172] (0xc000aee000) (1) Data frame sent\nI0308 13:26:21.151492 386 log.go:172] (0xc000b64000) (0xc000aee000) Stream removed, broadcasting: 1\nI0308 13:26:21.151519 386 log.go:172] (0xc000b64000) Go away received\nI0308 13:26:21.151906 386 log.go:172] (0xc000b64000) (0xc000aee000) Stream removed, broadcasting: 1\nI0308 13:26:21.151923 386 log.go:172] (0xc000b64000) (0xc0000c14a0) Stream removed, broadcasting: 3\nI0308 13:26:21.151931 386 log.go:172] (0xc000b64000) (0xc00061ba40) Stream removed, broadcasting: 5\n" Mar 8 13:26:21.154: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 13:26:21.154: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 13:26:21.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 13:26:21.340: INFO: stderr: "I0308 13:26:21.278179 408 log.go:172] (0xc0003c0f20) (0xc0006aba40) Create stream\nI0308 13:26:21.278225 408 log.go:172] (0xc0003c0f20) (0xc0006aba40) Stream added, broadcasting: 1\nI0308 13:26:21.280010 408 log.go:172] (0xc0003c0f20) Reply frame received for 1\nI0308 13:26:21.280041 408 log.go:172] (0xc0003c0f20) (0xc0007f00a0) Create stream\nI0308 13:26:21.280054 408 log.go:172] (0xc0003c0f20) (0xc0007f00a0) Stream added, broadcasting: 3\nI0308 13:26:21.280736 408 log.go:172] (0xc0003c0f20) Reply frame received for 3\nI0308 13:26:21.280785 408 log.go:172] (0xc0003c0f20) (0xc0006abc20) Create stream\nI0308 13:26:21.280796 408 log.go:172] (0xc0003c0f20) (0xc0006abc20) Stream added, broadcasting: 5\nI0308 13:26:21.281486 408 log.go:172] (0xc0003c0f20) Reply frame received for 5\nI0308 13:26:21.336197 408 log.go:172] (0xc0003c0f20) Data frame received for 3\nI0308 13:26:21.336217 408 
log.go:172] (0xc0007f00a0) (3) Data frame handling\nI0308 13:26:21.336235 408 log.go:172] (0xc0007f00a0) (3) Data frame sent\nI0308 13:26:21.336248 408 log.go:172] (0xc0003c0f20) Data frame received for 3\nI0308 13:26:21.336265 408 log.go:172] (0xc0007f00a0) (3) Data frame handling\nI0308 13:26:21.336335 408 log.go:172] (0xc0003c0f20) Data frame received for 5\nI0308 13:26:21.336348 408 log.go:172] (0xc0006abc20) (5) Data frame handling\nI0308 13:26:21.336358 408 log.go:172] (0xc0006abc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 13:26:21.336408 408 log.go:172] (0xc0003c0f20) Data frame received for 5\nI0308 13:26:21.336421 408 log.go:172] (0xc0006abc20) (5) Data frame handling\nI0308 13:26:21.337921 408 log.go:172] (0xc0003c0f20) Data frame received for 1\nI0308 13:26:21.337948 408 log.go:172] (0xc0006aba40) (1) Data frame handling\nI0308 13:26:21.337959 408 log.go:172] (0xc0006aba40) (1) Data frame sent\nI0308 13:26:21.337974 408 log.go:172] (0xc0003c0f20) (0xc0006aba40) Stream removed, broadcasting: 1\nI0308 13:26:21.337994 408 log.go:172] (0xc0003c0f20) Go away received\nI0308 13:26:21.338276 408 log.go:172] (0xc0003c0f20) (0xc0006aba40) Stream removed, broadcasting: 1\nI0308 13:26:21.338295 408 log.go:172] (0xc0003c0f20) (0xc0007f00a0) Stream removed, broadcasting: 3\nI0308 13:26:21.338304 408 log.go:172] (0xc0003c0f20) (0xc0006abc20) Stream removed, broadcasting: 5\n" Mar 8 13:26:21.341: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 13:26:21.341: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 13:26:21.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1759 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 13:26:21.516: INFO: stderr: "I0308 13:26:21.450675 429 log.go:172] (0xc000a85600) 
(0xc00099c5a0) Create stream\nI0308 13:26:21.450722 429 log.go:172] (0xc000a85600) (0xc00099c5a0) Stream added, broadcasting: 1\nI0308 13:26:21.454760 429 log.go:172] (0xc000a85600) Reply frame received for 1\nI0308 13:26:21.454801 429 log.go:172] (0xc000a85600) (0xc0006efb80) Create stream\nI0308 13:26:21.454810 429 log.go:172] (0xc000a85600) (0xc0006efb80) Stream added, broadcasting: 3\nI0308 13:26:21.455446 429 log.go:172] (0xc000a85600) Reply frame received for 3\nI0308 13:26:21.455469 429 log.go:172] (0xc000a85600) (0xc000636780) Create stream\nI0308 13:26:21.455479 429 log.go:172] (0xc000a85600) (0xc000636780) Stream added, broadcasting: 5\nI0308 13:26:21.456136 429 log.go:172] (0xc000a85600) Reply frame received for 5\nI0308 13:26:21.511758 429 log.go:172] (0xc000a85600) Data frame received for 5\nI0308 13:26:21.511777 429 log.go:172] (0xc000636780) (5) Data frame handling\nI0308 13:26:21.511784 429 log.go:172] (0xc000636780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 13:26:21.511797 429 log.go:172] (0xc000a85600) Data frame received for 3\nI0308 13:26:21.511819 429 log.go:172] (0xc0006efb80) (3) Data frame handling\nI0308 13:26:21.511829 429 log.go:172] (0xc0006efb80) (3) Data frame sent\nI0308 13:26:21.511836 429 log.go:172] (0xc000a85600) Data frame received for 3\nI0308 13:26:21.511841 429 log.go:172] (0xc0006efb80) (3) Data frame handling\nI0308 13:26:21.511860 429 log.go:172] (0xc000a85600) Data frame received for 5\nI0308 13:26:21.511867 429 log.go:172] (0xc000636780) (5) Data frame handling\nI0308 13:26:21.514044 429 log.go:172] (0xc000a85600) Data frame received for 1\nI0308 13:26:21.514056 429 log.go:172] (0xc00099c5a0) (1) Data frame handling\nI0308 13:26:21.514062 429 log.go:172] (0xc00099c5a0) (1) Data frame sent\nI0308 13:26:21.514158 429 log.go:172] (0xc000a85600) (0xc00099c5a0) Stream removed, broadcasting: 1\nI0308 13:26:21.514219 429 log.go:172] (0xc000a85600) Go away received\nI0308 13:26:21.514452 429 
log.go:172] (0xc000a85600) (0xc00099c5a0) Stream removed, broadcasting: 1\nI0308 13:26:21.514468 429 log.go:172] (0xc000a85600) (0xc0006efb80) Stream removed, broadcasting: 3\nI0308 13:26:21.514475 429 log.go:172] (0xc000a85600) (0xc000636780) Stream removed, broadcasting: 5\n" Mar 8 13:26:21.516: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 13:26:21.516: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 13:26:21.516: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 13:26:31.528: INFO: Deleting all statefulset in ns statefulset-1759 Mar 8 13:26:31.531: INFO: Scaling statefulset ss to 0 Mar 8 13:26:31.539: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 13:26:31.541: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:26:31.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1759" for this suite. 
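The "scaled down in reverse order" check above relies on the StatefulSet ordering guarantee: on scale-down, pods terminate from the highest ordinal down to ordinal 0. A minimal sketch of that expectation (the helper name is illustrative, not part of the test framework):

```shell
# Print the order in which a StatefulSet's pods are expected to terminate
# on scale-down: highest ordinal first, ordinal 0 last.
expected_teardown_order() {
  name=$1
  replicas=$2
  for i in $(seq $((replicas - 1)) -1 0); do
    printf '%s-%s\n' "$name" "$i"
  done
}
```

For the three-replica set ss above this yields ss-2, ss-1, ss-0, which is the order the test verifies.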
• [SLOW TEST:72.097 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":38,"skipped":656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:26:31.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0308 13:27:11.688935 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 13:27:11.688: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:11.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8853" for this suite. 
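The orphaning behaviour exercised above ("if delete options say so") is driven by the delete options sent with the owner's deletion. A sketch of the relevant option (the `rc` name in the comment is hypothetical):

```shell
# DeleteOptions with propagationPolicy=Orphan: the garbage collector leaves
# dependent pods in place when the owning ReplicationController is deleted.
orphan_delete_options='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
echo "$orphan_delete_options"
# Against a live cluster the same effect is available via, e.g.:
#   kubectl delete rc <name> --cascade=orphan   # newer kubectl; older releases used --cascade=false
```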
• [SLOW TEST:40.121 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":39,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:11.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-371 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-371 I0308 13:27:11.783635 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-371, replica count: 2 I0308 13:27:14.834049 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 13:27:14.834: INFO: Creating new exec pod Mar 8 13:27:17.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-371 execpodn7v42 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 13:27:18.068: INFO: stderr: "I0308 13:27:18.003927 451 log.go:172] (0xc000a00630) (0xc0005f1d60) Create stream\nI0308 13:27:18.004015 451 log.go:172] (0xc000a00630) (0xc0005f1d60) Stream added, broadcasting: 1\nI0308 13:27:18.006223 451 log.go:172] (0xc000a00630) Reply frame received for 1\nI0308 13:27:18.006264 451 log.go:172] (0xc000a00630) (0xc000574640) Create stream\nI0308 13:27:18.006275 451 log.go:172] (0xc000a00630) (0xc000574640) Stream added, broadcasting: 3\nI0308 13:27:18.007086 451 log.go:172] (0xc000a00630) Reply frame received for 3\nI0308 13:27:18.007119 451 log.go:172] (0xc000a00630) (0xc000759400) Create stream\nI0308 13:27:18.007129 451 log.go:172] (0xc000a00630) (0xc000759400) Stream added, broadcasting: 5\nI0308 13:27:18.007964 451 log.go:172] (0xc000a00630) Reply frame received for 5\nI0308 13:27:18.061571 451 log.go:172] (0xc000a00630) Data frame received for 5\nI0308 13:27:18.061598 451 log.go:172] (0xc000759400) (5) Data frame handling\nI0308 13:27:18.061620 451 log.go:172] (0xc000759400) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 13:27:18.062907 451 log.go:172] (0xc000a00630) Data frame received for 5\nI0308 13:27:18.062932 451 log.go:172] (0xc000759400) (5) Data frame handling\nI0308 13:27:18.062943 451 log.go:172] (0xc000759400) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 13:27:18.063366 451 log.go:172] (0xc000a00630) Data frame received for 3\nI0308 13:27:18.063411 451 log.go:172] (0xc000574640) (3) Data frame handling\nI0308 13:27:18.063589 451 log.go:172] (0xc000a00630) Data frame received for 5\nI0308 13:27:18.063616 451 log.go:172] (0xc000759400) (5) Data frame 
handling\nI0308 13:27:18.065345 451 log.go:172] (0xc000a00630) Data frame received for 1\nI0308 13:27:18.065364 451 log.go:172] (0xc0005f1d60) (1) Data frame handling\nI0308 13:27:18.065378 451 log.go:172] (0xc0005f1d60) (1) Data frame sent\nI0308 13:27:18.065388 451 log.go:172] (0xc000a00630) (0xc0005f1d60) Stream removed, broadcasting: 1\nI0308 13:27:18.065549 451 log.go:172] (0xc000a00630) Go away received\nI0308 13:27:18.065656 451 log.go:172] (0xc000a00630) (0xc0005f1d60) Stream removed, broadcasting: 1\nI0308 13:27:18.065674 451 log.go:172] (0xc000a00630) (0xc000574640) Stream removed, broadcasting: 3\nI0308 13:27:18.065683 451 log.go:172] (0xc000a00630) (0xc000759400) Stream removed, broadcasting: 5\n" Mar 8 13:27:18.069: INFO: stdout: "" Mar 8 13:27:18.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-371 execpodn7v42 -- /bin/sh -x -c nc -zv -t -w 2 10.96.72.172 80' Mar 8 13:27:18.312: INFO: stderr: "I0308 13:27:18.233238 472 log.go:172] (0xc000564d10) (0xc0006b3c20) Create stream\nI0308 13:27:18.233287 472 log.go:172] (0xc000564d10) (0xc0006b3c20) Stream added, broadcasting: 1\nI0308 13:27:18.235411 472 log.go:172] (0xc000564d10) Reply frame received for 1\nI0308 13:27:18.235451 472 log.go:172] (0xc000564d10) (0xc000a56000) Create stream\nI0308 13:27:18.235466 472 log.go:172] (0xc000564d10) (0xc000a56000) Stream added, broadcasting: 3\nI0308 13:27:18.236441 472 log.go:172] (0xc000564d10) Reply frame received for 3\nI0308 13:27:18.236465 472 log.go:172] (0xc000564d10) (0xc0006b3cc0) Create stream\nI0308 13:27:18.236473 472 log.go:172] (0xc000564d10) (0xc0006b3cc0) Stream added, broadcasting: 5\nI0308 13:27:18.237227 472 log.go:172] (0xc000564d10) Reply frame received for 5\nI0308 13:27:18.308360 472 log.go:172] (0xc000564d10) Data frame received for 3\nI0308 13:27:18.308385 472 log.go:172] (0xc000a56000) (3) Data frame handling\nI0308 13:27:18.308465 472 log.go:172] (0xc000564d10) Data frame received for 
5\nI0308 13:27:18.308489 472 log.go:172] (0xc0006b3cc0) (5) Data frame handling\nI0308 13:27:18.308503 472 log.go:172] (0xc0006b3cc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.72.172 80\nConnection to 10.96.72.172 80 port [tcp/http] succeeded!\nI0308 13:27:18.308566 472 log.go:172] (0xc000564d10) Data frame received for 5\nI0308 13:27:18.308580 472 log.go:172] (0xc0006b3cc0) (5) Data frame handling\nI0308 13:27:18.309718 472 log.go:172] (0xc000564d10) Data frame received for 1\nI0308 13:27:18.309734 472 log.go:172] (0xc0006b3c20) (1) Data frame handling\nI0308 13:27:18.309741 472 log.go:172] (0xc0006b3c20) (1) Data frame sent\nI0308 13:27:18.309754 472 log.go:172] (0xc000564d10) (0xc0006b3c20) Stream removed, broadcasting: 1\nI0308 13:27:18.309792 472 log.go:172] (0xc000564d10) Go away received\nI0308 13:27:18.310041 472 log.go:172] (0xc000564d10) (0xc0006b3c20) Stream removed, broadcasting: 1\nI0308 13:27:18.310054 472 log.go:172] (0xc000564d10) (0xc000a56000) Stream removed, broadcasting: 3\nI0308 13:27:18.310062 472 log.go:172] (0xc000564d10) (0xc0006b3cc0) Stream removed, broadcasting: 5\n" Mar 8 13:27:18.312: INFO: stdout: "" Mar 8 13:27:18.312: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:18.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-371" for this suite. 
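The type flip tested above can be reproduced with a strategic-merge patch; a sketch, with the helper name assumed (the service name comes from the test log):

```shell
# Build the JSON patch that converts a Service from ExternalName to the
# given type; externalName must be cleared when the type changes.
build_type_patch() {
  printf '{"spec":{"type":"%s","externalName":null}}' "$1"
}
# Against a live cluster:
#   kubectl patch service externalname-service -p "$(build_type_patch ClusterIP)"
```

After the patch, the test's `nc -zv -t -w 2 externalname-service 80` probe succeeds because the service now resolves to a ClusterIP backed by the replication controller's endpoints.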
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.646 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":40,"skipped":735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:18.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Mar 8 13:27:20.420: INFO: Pod pod-hostip-dde88ab9-f69e-4797-8fe9-b858d5c32caa has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:20.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6729" for this suite. 
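The hostIP assertion above reads `status.hostIP`, which is populated with the node's IP once the pod is scheduled. A sketch (function names are illustrative, not from the framework):

```shell
# Fetch a pod's status.hostIP; empty until the pod has been scheduled.
get_host_ip() {
  kubectl get pod "$1" -o jsonpath='{.status.hostIP}'
}

# Rough IPv4 shape check, since the test only asserts the field holds a
# usable address (172.17.0.4 above, the node's IP in the kind network).
looks_like_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
```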
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":764,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:20.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 8 13:27:20.479: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix203154901/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:20.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5764" for this suite. 
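The proxy test above serves the API over a unix socket instead of a TCP port, then retrieves `/api/` through it. A sketch of the equivalent manual steps (socket path and helper name assumed):

```shell
# Build the curl invocation that queries the proxied API via a unix socket.
proxy_curl_cmd() {
  printf 'curl --unix-socket %s http://localhost/api/' "$1"
}
# Against a live cluster:
#   kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
#   eval "$(proxy_curl_cmd /tmp/kubectl-proxy.sock)"
```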
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":42,"skipped":776,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:20.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3d14e95e-77b9-4124-8d0b-2633f71eda95 STEP: Creating a pod to test consume configMaps Mar 8 13:27:20.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c" in namespace "configmap-3397" to be "success or failure" Mar 8 13:27:20.763: INFO: Pod "pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11769ms Mar 8 13:27:22.774: INFO: Pod "pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.033043286s STEP: Saw pod success Mar 8 13:27:22.774: INFO: Pod "pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c" satisfied condition "success or failure" Mar 8 13:27:22.776: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c container configmap-volume-test: STEP: delete the pod Mar 8 13:27:22.814: INFO: Waiting for pod pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c to disappear Mar 8 13:27:22.818: INFO: Pod pod-configmaps-21ad9d87-0f21-4355-9a58-aae9c5d0279c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:22.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3397" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":785,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:22.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:27:23.379: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 13:27:25.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270843, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270843, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270843, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270843, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:27:28.412: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be 
possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:28.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-903" for this suite. STEP: Destroying namespace "webhook-903-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.749 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":44,"skipped":786,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:28.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-73.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-73.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 13:27:30.730: INFO: DNS probes using dns-73/dns-test-5da3d722-0efd-49c4-b8e3-ec2258f8ed7a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:30.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-73" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":45,"skipped":788,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:30.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name 
projected-secret-test-map-e14ee460-3ec7-489a-8e89-70e5256b474e STEP: Creating a pod to test consume secrets Mar 8 13:27:30.894: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674" in namespace "projected-1768" to be "success or failure" Mar 8 13:27:30.918: INFO: Pod "pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674": Phase="Pending", Reason="", readiness=false. Elapsed: 24.585976ms Mar 8 13:27:32.921: INFO: Pod "pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027494906s STEP: Saw pod success Mar 8 13:27:32.921: INFO: Pod "pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674" satisfied condition "success or failure" Mar 8 13:27:32.923: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674 container projected-secret-volume-test: STEP: delete the pod Mar 8 13:27:32.948: INFO: Waiting for pod pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674 to disappear Mar 8 13:27:32.957: INFO: Pod pod-projected-secrets-144df002-9645-4d54-8ee8-cbe3b8742674 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:32.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1768" for this suite. 
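The DNS probe commands recorded two tests above are flattened onto one line and carry the pod-spec escaping (`$$` expands to `$` when the spec is rendered). A readable sketch of the same loop follows; `dig` is stubbed with a hypothetical function so the structure can run outside the test pod, the real probe runs dig(1) against the cluster DNS and writes to `/results`:

```shell
#!/bin/sh
# Hedged sketch of the wheezy/jessie DNS probe loop from the log, unescaped
# and reformatted. Assumption: `dig` is stubbed here; the real test image
# runs dig(1) and loops 600 times with a 1s sleep.
dig() { echo "10.96.0.1"; }            # stub standing in for real dig(1) output
results_dir="$(mktemp -d)"             # real test writes to /results
name="kubernetes.default.svc.cluster.local"
for i in $(seq 1 3); do                # real loop: seq 1 600, plus sleep 1
  check="$(dig +notcp +noall +answer +search "$name" A)" \
    && test -n "$check" \
    && echo OK > "$results_dir/wheezy_udp@$name"
  check="$(dig +tcp +noall +answer +search "$name" A)" \
    && test -n "$check" \
    && echo OK > "$results_dir/wheezy_tcp@$name"
done
cat "$results_dir/wheezy_udp@$name"
```

The probe only writes the `OK` marker file when the lookup returned a non-empty answer, which is what the test's "looking for the results for each expected name" step then reads back.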
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":798,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:32.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:35.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-830" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":803,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:35.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 13:27:41.389: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 13:27:41.409: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 13:27:43.410: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 13:27:43.414: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 13:27:45.410: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 13:27:45.414: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 13:27:47.410: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 13:27:47.414: INFO: Pod pod-with-poststart-exec-hook still exists Mar 8 13:27:49.410: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 8 13:27:49.414: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:49.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5846" for this suite. 
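The repeated "Waiting for pod ... to disappear / still exists" lines above are the framework's delete-and-poll loop. A minimal sketch of that pattern, with the kubectl existence check replaced by a hypothetical stub so it runs anywhere (the real check is roughly `kubectl get pod pod-with-poststart-exec-hook`, polled every 2s until NotFound):

```shell
#!/bin/sh
# Sketch of the poll-until-deleted pattern seen in the log. Assumption:
# `pod_exists` is a stub; the real loop queries the API server and sleeps 2s.
attempts=0
pod_exists() { [ "$attempts" -lt 4 ]; }   # stub: pod "disappears" after 4 polls
while pod_exists; do
  attempts=$((attempts + 1))
  echo "Pod pod-with-poststart-exec-hook still exists"
  # real loop: sleep 2, with an overall timeout
done
echo "Pod pod-with-poststart-exec-hook no longer exists"
```

This matches the log's cadence: several "still exists" lines at 2-second intervals, then a single "no longer exists" line once the API server reports the pod gone.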
• [SLOW TEST:14.180 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":813,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:49.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:27:49.497: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:50.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-2946" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":49,"skipped":824,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:50.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:27:51.545: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 13:27:53.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270871, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270871, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270871, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719270871, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:27:56.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:56.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9186" for this suite. STEP: Destroying namespace "webhook-9186-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.974 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":50,"skipped":835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:56.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:27:56.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9551' Mar 8 13:27:57.095: INFO: stderr: "" Mar 8 13:27:57.095: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 8 
13:27:57.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9551' Mar 8 13:27:57.311: INFO: stderr: "" Mar 8 13:27:57.311: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 13:27:58.315: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 13:27:58.315: INFO: Found 0 / 1 Mar 8 13:27:59.315: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 13:27:59.315: INFO: Found 1 / 1 Mar 8 13:27:59.315: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 13:27:59.318: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 13:27:59.318: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 13:27:59.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-kgvkd --namespace=kubectl-9551' Mar 8 13:27:59.466: INFO: stderr: "" Mar 8 13:27:59.466: INFO: stdout: "Name: agnhost-master-kgvkd\nNamespace: kubectl-9551\nPriority: 0\nNode: kind-worker2/172.17.0.5\nStart Time: Sun, 08 Mar 2020 13:27:57 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.70\nIPs:\n IP: 10.244.1.70\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://bb31c3e52fc1f1df6606e45b5a61db0d98fe4d1667c191cbd4d93c94c838337f\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 08 Mar 2020 13:27:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ndkzq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ndkzq:\n Type: Secret (a volume populated by a Secret)\n 
SecretName: default-token-ndkzq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-9551/agnhost-master-kgvkd to kind-worker2\n Normal Pulled 2s kubelet, kind-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, kind-worker2 Created container agnhost-master\n Normal Started 1s kubelet, kind-worker2 Started container agnhost-master\n" Mar 8 13:27:59.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9551' Mar 8 13:27:59.597: INFO: stderr: "" Mar 8 13:27:59.597: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9551\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-kgvkd\n" Mar 8 13:27:59.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9551' Mar 8 13:27:59.728: INFO: stderr: "" Mar 8 13:27:59.729: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9551\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.60.157\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.70:6379\nSession Affinity: None\nEvents: \n" Mar 8 
13:27:59.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node kind-control-plane' Mar 8 13:27:59.864: INFO: stderr: "" Mar 8 13:27:59.864: INFO: stdout: "Name: kind-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kind-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 12:58:15 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kind-control-plane\n AcquireTime: \n RenewTime: Sun, 08 Mar 2020 13:27:50 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 08 Mar 2020 13:23:40 +0000 Sun, 08 Mar 2020 12:58:09 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 08 Mar 2020 13:23:40 +0000 Sun, 08 Mar 2020 12:58:09 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 08 Mar 2020 13:23:40 +0000 Sun, 08 Mar 2020 12:58:09 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 08 Mar 2020 13:23:40 +0000 Sun, 08 Mar 2020 12:58:39 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: kind-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 2933ba1eefe849398e3d689ac3f8c18d\n System UUID: 4fc5801b-e912-42d9-a033-1413f04d9cad\n Boot ID: 
3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-b6x6n 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 29m\n kube-system coredns-6955765f44-q8m7h 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 29m\n kube-system etcd-kind-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m\n kube-system kindnet-pwjr9 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 29m\n kube-system kube-apiserver-kind-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 29m\n kube-system kube-controller-manager-kind-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 29m\n kube-system kube-proxy-qwvq4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m\n kube-system kube-scheduler-kind-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 29m\n local-path-storage local-path-provisioner-7745554f7f-9j4gn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal NodeHasSufficientMemory 29m (x6 over 29m) kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 29m (x5 over 29m) kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 29m (x5 over 29m) kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientPID\n Normal Starting 29m kubelet, kind-control-plane Starting kubelet.\n Normal 
NodeHasSufficientMemory 29m kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 29m kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 29m kubelet, kind-control-plane Node kind-control-plane status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 29m kubelet, kind-control-plane Updated Node Allocatable limit across pods\n Warning readOnlySysFS 29m kube-proxy, kind-control-plane CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)\n Normal Starting 29m kube-proxy, kind-control-plane Starting kube-proxy.\n Normal NodeReady 29m kubelet, kind-control-plane Node kind-control-plane status is now: NodeReady\n" Mar 8 13:27:59.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9551' Mar 8 13:27:59.961: INFO: stderr: "" Mar 8 13:27:59.962: INFO: stdout: "Name: kubectl-9551\nLabels: e2e-framework=kubectl\n e2e-run=9dbdea0e-215a-4859-904d-cbfe7e9b5fdf\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:27:59.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9551" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":51,"skipped":860,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:27:59.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 13:28:00.010: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:28:06.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5922" for this suite. 
• [SLOW TEST:6.664 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":52,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:28:06.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-cdf1b658-2972-4870-bebc-cb6a0cf9362f STEP: Creating a pod to test consume configMaps Mar 8 13:28:06.719: INFO: Waiting up to 5m0s for pod "pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168" in namespace "configmap-4288" to be "success or failure" Mar 8 13:28:06.735: INFO: Pod "pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168": Phase="Pending", Reason="", readiness=false. Elapsed: 15.560549ms Mar 8 13:28:08.741: INFO: Pod "pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021246949s Mar 8 13:28:10.787: INFO: Pod "pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067466631s STEP: Saw pod success Mar 8 13:28:10.787: INFO: Pod "pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168" satisfied condition "success or failure" Mar 8 13:28:10.790: INFO: Trying to get logs from node kind-worker pod pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168 container configmap-volume-test: STEP: delete the pod Mar 8 13:28:10.803: INFO: Waiting for pod pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168 to disappear Mar 8 13:28:10.808: INFO: Pod pod-configmaps-6542506e-8651-4a0b-8f04-ac54ece6d168 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:28:10.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4288" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":890,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:28:10.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server 
cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:28:11.508: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:28:14.528: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:28:14.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3741" for this suite. STEP: Destroying namespace "webhook-3741-markers" for this suite. 
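Several tests in this run log the same phase-polling sequence ("Waiting up to 5m0s for pod ... to be \"success or failure\"", then `Phase="Pending"` entries until `Phase="Succeeded"`). A sketch of that loop, with the phase lookup stubbed by a hypothetical `get_phase` (the real framework reads `.status.phase` from the API, roughly `kubectl get pod -o jsonpath='{.status.phase}'`):

```shell
#!/bin/sh
# Sketch of the "success or failure" pod-phase polling the framework logs.
# Assumption: `get_phase` is a stub that sets $phase directly (avoiding a
# subshell so the poll counter persists); the real loop queries the API.
poll=0
get_phase() {
  poll=$((poll + 1))
  if [ "$poll" -le 2 ]; then phase="Pending"; else phase="Succeeded"; fi
}
while :; do
  get_phase
  echo "Phase=\"$phase\""
  case "$phase" in
    Succeeded|Failed) break ;;           # terminal phases end the wait
  esac
  # real loop: sleep 2, give up after the 5m deadline
done
```

The test then treats `Succeeded` as "success or failure" satisfied, fetches the container logs, and deletes the pod, which is exactly the sequence the surrounding log entries show.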
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":54,"skipped":912,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:28:14.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 8 13:28:14.751: INFO: Waiting up to 5m0s for pod "var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4" in namespace "var-expansion-5478" to be "success or failure" Mar 8 13:28:14.755: INFO: Pod "var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876446ms Mar 8 13:28:16.759: INFO: Pod "var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00756131s Mar 8 13:28:18.763: INFO: Pod "var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011905989s STEP: Saw pod success Mar 8 13:28:18.763: INFO: Pod "var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4" satisfied condition "success or failure" Mar 8 13:28:18.766: INFO: Trying to get logs from node kind-worker pod var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4 container dapi-container: STEP: delete the pod Mar 8 13:28:18.781: INFO: Waiting for pod var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4 to disappear Mar 8 13:28:18.797: INFO: Pod var-expansion-7a2870d4-600d-44dd-8dbd-cf407c4accb4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:28:18.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5478" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:28:18.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-3afff697-7e22-4316-aed6-1f9be60d56a5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: 
Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:28:20.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5719" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":984,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:28:20.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 8 13:28:23.488: INFO: Successfully updated pod "annotationupdatefe058862-2d82-45da-882e-7f08b89ce9ee" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:28:25.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6098" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1000,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:28:25.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 8 13:28:25.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2" in namespace "projected-4964" to be "success or failure"
Mar 8 13:28:25.580: INFO: Pod "downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.643807ms
Mar 8 13:28:27.584: INFO: Pod "downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006724331s
STEP: Saw pod success
Mar 8 13:28:27.584: INFO: Pod "downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2" satisfied condition "success or failure"
Mar 8 13:28:27.591: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2 container client-container:
STEP: delete the pod
Mar 8 13:28:27.607: INFO: Waiting for pod downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2 to disappear
Mar 8 13:28:27.612: INFO: Pod downwardapi-volume-36d2ad00-f599-4050-8ae2-e5b1e018cac2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:28:27.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4964" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1013,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:28:27.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 8 13:28:27.710: INFO: Waiting up to 5m0s for pod "pod-282c26d0-743e-4c4c-aed6-a6f63cdff860" in namespace "emptydir-9783" to be "success or failure"
Mar 8 13:28:27.719: INFO: Pod "pod-282c26d0-743e-4c4c-aed6-a6f63cdff860": Phase="Pending", Reason="", readiness=false. Elapsed: 9.523731ms
Mar 8 13:28:29.723: INFO: Pod "pod-282c26d0-743e-4c4c-aed6-a6f63cdff860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012943766s
STEP: Saw pod success
Mar 8 13:28:29.723: INFO: Pod "pod-282c26d0-743e-4c4c-aed6-a6f63cdff860" satisfied condition "success or failure"
Mar 8 13:28:29.726: INFO: Trying to get logs from node kind-worker2 pod pod-282c26d0-743e-4c4c-aed6-a6f63cdff860 container test-container:
STEP: delete the pod
Mar 8 13:28:29.755: INFO: Waiting for pod pod-282c26d0-743e-4c4c-aed6-a6f63cdff860 to disappear
Mar 8 13:28:29.761: INFO: Pod pod-282c26d0-743e-4c4c-aed6-a6f63cdff860 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:28:29.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9783" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1013,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:28:29.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Mar 8 13:28:29.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3842'
Mar 8 13:28:30.094: INFO: stderr: ""
Mar 8 13:28:30.094: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 8 13:28:30.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3842'
Mar 8 13:28:30.216: INFO: stderr: ""
Mar 8 13:28:30.216: INFO: stdout: "update-demo-nautilus-twhc5 update-demo-nautilus-z2qwk "
Mar 8 13:28:30.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-twhc5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:28:30.304: INFO: stderr: ""
Mar 8 13:28:30.304: INFO: stdout: ""
Mar 8 13:28:30.304: INFO: update-demo-nautilus-twhc5 is created but not running
Mar 8 13:28:35.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3842'
Mar 8 13:28:35.398: INFO: stderr: ""
Mar 8 13:28:35.398: INFO: stdout: "update-demo-nautilus-twhc5 update-demo-nautilus-z2qwk "
Mar 8 13:28:35.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-twhc5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:28:35.502: INFO: stderr: ""
Mar 8 13:28:35.502: INFO: stdout: "true"
Mar 8 13:28:35.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-twhc5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:28:35.606: INFO: stderr: ""
Mar 8 13:28:35.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 8 13:28:35.606: INFO: validating pod update-demo-nautilus-twhc5
Mar 8 13:28:35.610: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 8 13:28:35.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 8 13:28:35.610: INFO: update-demo-nautilus-twhc5 is verified up and running
Mar 8 13:28:35.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z2qwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:28:35.727: INFO: stderr: ""
Mar 8 13:28:35.727: INFO: stdout: "true"
Mar 8 13:28:35.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z2qwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:28:35.843: INFO: stderr: ""
Mar 8 13:28:35.843: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 8 13:28:35.843: INFO: validating pod update-demo-nautilus-z2qwk
Mar 8 13:28:35.847: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 8 13:28:35.847: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 8 13:28:35.847: INFO: update-demo-nautilus-z2qwk is verified up and running
STEP: rolling-update to new replication controller
Mar 8 13:28:35.849: INFO: scanned /root for discovery docs:
Mar 8 13:28:35.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3842'
Mar 8 13:28:58.360: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 8 13:28:58.360: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 8 13:28:58.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3842'
Mar 8 13:28:58.491: INFO: stderr: ""
Mar 8 13:28:58.491: INFO: stdout: "update-demo-kitten-kl6n6 update-demo-kitten-r9qwz update-demo-nautilus-z2qwk "
STEP: Replicas for name=update-demo: expected=2 actual=3
Mar 8 13:29:03.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3842'
Mar 8 13:29:03.622: INFO: stderr: ""
Mar 8 13:29:03.622: INFO: stdout: "update-demo-kitten-kl6n6 update-demo-kitten-r9qwz "
Mar 8 13:29:03.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kl6n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:29:03.725: INFO: stderr: ""
Mar 8 13:29:03.725: INFO: stdout: "true"
Mar 8 13:29:03.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kl6n6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:29:03.834: INFO: stderr: ""
Mar 8 13:29:03.834: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 8 13:29:03.834: INFO: validating pod update-demo-kitten-kl6n6
Mar 8 13:29:03.839: INFO: got data: {
  "image": "kitten.jpg"
}
Mar 8 13:29:03.839: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 8 13:29:03.839: INFO: update-demo-kitten-kl6n6 is verified up and running
Mar 8 13:29:03.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r9qwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:29:03.943: INFO: stderr: ""
Mar 8 13:29:03.943: INFO: stdout: "true"
Mar 8 13:29:03.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r9qwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3842'
Mar 8 13:29:04.055: INFO: stderr: ""
Mar 8 13:29:04.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 8 13:29:04.055: INFO: validating pod update-demo-kitten-r9qwz
Mar 8 13:29:04.058: INFO: got data: {
  "image": "kitten.jpg"
}
Mar 8 13:29:04.058: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 8 13:29:04.058: INFO: update-demo-kitten-r9qwz is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:04.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3842" for this suite.

• [SLOW TEST:34.295 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":60,"skipped":1018,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:04.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Mar 8 13:29:04.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 8 13:29:04.344: INFO: stderr: ""
Mar 8 13:29:04.344: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:04.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7223" for this suite.
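(Editor's note, not part of the suite output: the api-versions check above amounts to splitting kubectl's stdout on newlines and asserting that the core "v1" group/version is present. A minimal standalone sketch of that check, using an abbreviated subset of the group/version list captured in the log; the real test shells out to kubectl instead:)

```python
# Sketch of the "v1 is in available api versions" assertion, applied to a
# newline-separated list like the kubectl api-versions stdout captured above.
# The entries here are a hand-picked subset of the log's output for brevity.
stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "v1\n"  # the core group/version the conformance test asserts on
)

versions = stdout.strip().splitlines()
assert "v1" in versions, "core v1 API group/version missing from discovery"
print("found v1 among %d group/versions" % len(versions))
```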
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":61,"skipped":1019,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:04.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 8 13:29:04.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5817'
Mar 8 13:29:04.539: INFO: stderr: ""
Mar 8 13:29:04.539: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Mar 8 13:29:04.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5817'
Mar 8 13:29:19.455: INFO: stderr: ""
Mar 8 13:29:19.455: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:19.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5817" for this suite.

• [SLOW TEST:15.110 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":62,"skipped":1028,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:19.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:19.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5505" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":63,"skipped":1036,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:19.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-21824211-d872-44cd-a362-606865d72a12
STEP: Creating a pod to test consume configMaps
Mar 8 13:29:19.596: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6" in namespace "projected-8947" to be "success or failure"
Mar 8 13:29:19.610: INFO: Pod "pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.631625ms
Mar 8 13:29:21.614: INFO: Pod "pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017577713s
STEP: Saw pod success
Mar 8 13:29:21.614: INFO: Pod "pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6" satisfied condition "success or failure"
Mar 8 13:29:21.617: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6 container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 13:29:21.669: INFO: Waiting for pod pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6 to disappear
Mar 8 13:29:21.673: INFO: Pod pod-projected-configmaps-5030a8c8-9553-4568-9724-b1d66a348af6 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:21.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8947" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1044,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:21.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 13:29:22.295: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 13:29:25.329: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:35.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8925" for this suite.
STEP: Destroying namespace "webhook-8925-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.947 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":65,"skipped":1048,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:35.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:29:35.674: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:29:36.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3588" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":66,"skipped":1071,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:29:36.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9125
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar 8 13:29:36.332: INFO: Found 0 stateful pods, waiting for 3
Mar 8 13:29:46.336: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 8 13:29:46.336: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 8 13:29:46.336: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 8 13:29:46.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9125 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 8 13:29:46.600: INFO: stderr: "I0308 13:29:46.518674 1028 log.go:172] (0xc000bd73f0) (0xc000ae46e0) Create stream\nI0308 13:29:46.518741 1028 log.go:172] (0xc000bd73f0) (0xc000ae46e0) Stream added, broadcasting: 1\nI0308 13:29:46.525140 1028 log.go:172] (0xc000bd73f0) Reply frame received for 1\nI0308 13:29:46.525183 1028 log.go:172] (0xc000bd73f0) (0xc00067a640) Create stream\nI0308 13:29:46.525196 1028 log.go:172] (0xc000bd73f0) (0xc00067a640) Stream added, broadcasting: 3\nI0308 13:29:46.526433 1028 log.go:172] (0xc000bd73f0) Reply frame received for 3\nI0308 13:29:46.526471 1028 log.go:172] (0xc000bd73f0) (0xc000483400) Create stream\nI0308 13:29:46.526484 1028 log.go:172] (0xc000bd73f0) (0xc000483400) Stream added, broadcasting: 5\nI0308 13:29:46.527604 1028 log.go:172] (0xc000bd73f0) Reply frame received for 5\nI0308 13:29:46.569286 1028 log.go:172] (0xc000bd73f0) Data frame received for 5\nI0308 13:29:46.569304 1028 log.go:172] (0xc000483400) (5) Data frame handling\nI0308 13:29:46.569324 1028 log.go:172] (0xc000483400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 13:29:46.594224 1028 log.go:172] (0xc000bd73f0) Data frame received for 3\nI0308 13:29:46.594243 1028 log.go:172] (0xc00067a640) (3) Data frame handling\nI0308 13:29:46.594264 1028 log.go:172] (0xc00067a640) (3) Data frame sent\nI0308 13:29:46.594596 1028 log.go:172] (0xc000bd73f0) Data frame received for 3\nI0308 13:29:46.594625 1028 log.go:172] (0xc00067a640) (3) Data frame handling\nI0308 13:29:46.594810 1028 log.go:172] (0xc000bd73f0) Data frame received for 5\nI0308 13:29:46.594826 1028
log.go:172] (0xc000483400) (5) Data frame handling\nI0308 13:29:46.596594 1028 log.go:172] (0xc000bd73f0) Data frame received for 1\nI0308 13:29:46.596613 1028 log.go:172] (0xc000ae46e0) (1) Data frame handling\nI0308 13:29:46.596638 1028 log.go:172] (0xc000ae46e0) (1) Data frame sent\nI0308 13:29:46.596651 1028 log.go:172] (0xc000bd73f0) (0xc000ae46e0) Stream removed, broadcasting: 1\nI0308 13:29:46.596669 1028 log.go:172] (0xc000bd73f0) Go away received\nI0308 13:29:46.597042 1028 log.go:172] (0xc000bd73f0) (0xc000ae46e0) Stream removed, broadcasting: 1\nI0308 13:29:46.597065 1028 log.go:172] (0xc000bd73f0) (0xc00067a640) Stream removed, broadcasting: 3\nI0308 13:29:46.597080 1028 log.go:172] (0xc000bd73f0) (0xc000483400) Stream removed, broadcasting: 5\n" Mar 8 13:29:46.600: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 13:29:46.600: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 13:29:56.629: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 8 13:30:06.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9125 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 13:30:06.898: INFO: stderr: "I0308 13:30:06.826723 1048 log.go:172] (0xc0003d8dc0) (0xc000647c20) Create stream\nI0308 13:30:06.826782 1048 log.go:172] (0xc0003d8dc0) (0xc000647c20) Stream added, broadcasting: 1\nI0308 13:30:06.831350 1048 log.go:172] (0xc0003d8dc0) Reply frame received for 1\nI0308 13:30:06.831392 1048 log.go:172] (0xc0003d8dc0) (0xc0009b0000) Create stream\nI0308 13:30:06.831404 1048 log.go:172] (0xc0003d8dc0) (0xc0009b0000) Stream added, broadcasting: 3\nI0308 
13:30:06.833344 1048 log.go:172] (0xc0003d8dc0) Reply frame received for 3\nI0308 13:30:06.833390 1048 log.go:172] (0xc0003d8dc0) (0xc000134000) Create stream\nI0308 13:30:06.833410 1048 log.go:172] (0xc0003d8dc0) (0xc000134000) Stream added, broadcasting: 5\nI0308 13:30:06.834402 1048 log.go:172] (0xc0003d8dc0) Reply frame received for 5\nI0308 13:30:06.893259 1048 log.go:172] (0xc0003d8dc0) Data frame received for 3\nI0308 13:30:06.893292 1048 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0308 13:30:06.893318 1048 log.go:172] (0xc0009b0000) (3) Data frame sent\nI0308 13:30:06.893399 1048 log.go:172] (0xc0003d8dc0) Data frame received for 3\nI0308 13:30:06.893422 1048 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0308 13:30:06.893476 1048 log.go:172] (0xc0003d8dc0) Data frame received for 5\nI0308 13:30:06.893500 1048 log.go:172] (0xc000134000) (5) Data frame handling\nI0308 13:30:06.893523 1048 log.go:172] (0xc000134000) (5) Data frame sent\nI0308 13:30:06.893544 1048 log.go:172] (0xc0003d8dc0) Data frame received for 5\nI0308 13:30:06.893557 1048 log.go:172] (0xc000134000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 13:30:06.894986 1048 log.go:172] (0xc0003d8dc0) Data frame received for 1\nI0308 13:30:06.895013 1048 log.go:172] (0xc000647c20) (1) Data frame handling\nI0308 13:30:06.895029 1048 log.go:172] (0xc000647c20) (1) Data frame sent\nI0308 13:30:06.895046 1048 log.go:172] (0xc0003d8dc0) (0xc000647c20) Stream removed, broadcasting: 1\nI0308 13:30:06.895063 1048 log.go:172] (0xc0003d8dc0) Go away received\nI0308 13:30:06.895483 1048 log.go:172] (0xc0003d8dc0) (0xc000647c20) Stream removed, broadcasting: 1\nI0308 13:30:06.895503 1048 log.go:172] (0xc0003d8dc0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0308 13:30:06.895514 1048 log.go:172] (0xc0003d8dc0) (0xc000134000) Stream removed, broadcasting: 5\n" Mar 8 13:30:06.898: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" 
Mar 8 13:30:06.898: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 13:30:16.918: INFO: Waiting for StatefulSet statefulset-9125/ss2 to complete update Mar 8 13:30:16.918: INFO: Waiting for Pod statefulset-9125/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 13:30:16.918: INFO: Waiting for Pod statefulset-9125/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 13:30:26.926: INFO: Waiting for StatefulSet statefulset-9125/ss2 to complete update Mar 8 13:30:26.926: INFO: Waiting for Pod statefulset-9125/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 13:30:26.926: INFO: Waiting for Pod statefulset-9125/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 13:30:36.925: INFO: Waiting for StatefulSet statefulset-9125/ss2 to complete update Mar 8 13:30:36.926: INFO: Waiting for Pod statefulset-9125/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 8 13:30:46.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9125 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 13:30:48.799: INFO: stderr: "I0308 13:30:48.712428 1070 log.go:172] (0xc0005451e0) (0xc0007fbc20) Create stream\nI0308 13:30:48.712463 1070 log.go:172] (0xc0005451e0) (0xc0007fbc20) Stream added, broadcasting: 1\nI0308 13:30:48.715900 1070 log.go:172] (0xc0005451e0) Reply frame received for 1\nI0308 13:30:48.715946 1070 log.go:172] (0xc0005451e0) (0xc00066c000) Create stream\nI0308 13:30:48.715960 1070 log.go:172] (0xc0005451e0) (0xc00066c000) Stream added, broadcasting: 3\nI0308 13:30:48.716869 1070 log.go:172] (0xc0005451e0) Reply frame received for 3\nI0308 13:30:48.716903 1070 log.go:172] (0xc0005451e0) (0xc00066e000) Create stream\nI0308 13:30:48.716914 1070 
log.go:172] (0xc0005451e0) (0xc00066e000) Stream added, broadcasting: 5\nI0308 13:30:48.717899 1070 log.go:172] (0xc0005451e0) Reply frame received for 5\nI0308 13:30:48.769777 1070 log.go:172] (0xc0005451e0) Data frame received for 5\nI0308 13:30:48.769798 1070 log.go:172] (0xc00066e000) (5) Data frame handling\nI0308 13:30:48.769814 1070 log.go:172] (0xc00066e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 13:30:48.793376 1070 log.go:172] (0xc0005451e0) Data frame received for 3\nI0308 13:30:48.793400 1070 log.go:172] (0xc00066c000) (3) Data frame handling\nI0308 13:30:48.793428 1070 log.go:172] (0xc00066c000) (3) Data frame sent\nI0308 13:30:48.793451 1070 log.go:172] (0xc0005451e0) Data frame received for 3\nI0308 13:30:48.793478 1070 log.go:172] (0xc00066c000) (3) Data frame handling\nI0308 13:30:48.793765 1070 log.go:172] (0xc0005451e0) Data frame received for 5\nI0308 13:30:48.793784 1070 log.go:172] (0xc00066e000) (5) Data frame handling\nI0308 13:30:48.795582 1070 log.go:172] (0xc0005451e0) Data frame received for 1\nI0308 13:30:48.795614 1070 log.go:172] (0xc0007fbc20) (1) Data frame handling\nI0308 13:30:48.795631 1070 log.go:172] (0xc0007fbc20) (1) Data frame sent\nI0308 13:30:48.795653 1070 log.go:172] (0xc0005451e0) (0xc0007fbc20) Stream removed, broadcasting: 1\nI0308 13:30:48.795674 1070 log.go:172] (0xc0005451e0) Go away received\nI0308 13:30:48.796182 1070 log.go:172] (0xc0005451e0) (0xc0007fbc20) Stream removed, broadcasting: 1\nI0308 13:30:48.796209 1070 log.go:172] (0xc0005451e0) (0xc00066c000) Stream removed, broadcasting: 3\nI0308 13:30:48.796220 1070 log.go:172] (0xc0005451e0) (0xc00066e000) Stream removed, broadcasting: 5\n" Mar 8 13:30:48.799: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 13:30:48.799: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 13:30:58.830: 
INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 8 13:31:08.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9125 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 13:31:09.110: INFO: stderr: "I0308 13:31:09.042374 1105 log.go:172] (0xc000515c30) (0xc000a20640) Create stream\nI0308 13:31:09.042429 1105 log.go:172] (0xc000515c30) (0xc000a20640) Stream added, broadcasting: 1\nI0308 13:31:09.046097 1105 log.go:172] (0xc000515c30) Reply frame received for 1\nI0308 13:31:09.046174 1105 log.go:172] (0xc000515c30) (0xc000a20000) Create stream\nI0308 13:31:09.046192 1105 log.go:172] (0xc000515c30) (0xc000a20000) Stream added, broadcasting: 3\nI0308 13:31:09.047294 1105 log.go:172] (0xc000515c30) Reply frame received for 3\nI0308 13:31:09.047338 1105 log.go:172] (0xc000515c30) (0xc000700640) Create stream\nI0308 13:31:09.047352 1105 log.go:172] (0xc000515c30) (0xc000700640) Stream added, broadcasting: 5\nI0308 13:31:09.048457 1105 log.go:172] (0xc000515c30) Reply frame received for 5\nI0308 13:31:09.105651 1105 log.go:172] (0xc000515c30) Data frame received for 5\nI0308 13:31:09.105676 1105 log.go:172] (0xc000700640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0308 13:31:09.105724 1105 log.go:172] (0xc000515c30) Data frame received for 3\nI0308 13:31:09.105768 1105 log.go:172] (0xc000a20000) (3) Data frame handling\nI0308 13:31:09.105798 1105 log.go:172] (0xc000a20000) (3) Data frame sent\nI0308 13:31:09.105821 1105 log.go:172] (0xc000515c30) Data frame received for 3\nI0308 13:31:09.105836 1105 log.go:172] (0xc000a20000) (3) Data frame handling\nI0308 13:31:09.105875 1105 log.go:172] (0xc000700640) (5) Data frame sent\nI0308 13:31:09.105945 1105 log.go:172] (0xc000515c30) Data frame received for 5\nI0308 13:31:09.105966 1105 log.go:172] (0xc000700640) (5) Data frame handling\nI0308 13:31:09.107349 1105 
log.go:172] (0xc000515c30) Data frame received for 1\nI0308 13:31:09.107377 1105 log.go:172] (0xc000a20640) (1) Data frame handling\nI0308 13:31:09.107394 1105 log.go:172] (0xc000a20640) (1) Data frame sent\nI0308 13:31:09.107413 1105 log.go:172] (0xc000515c30) (0xc000a20640) Stream removed, broadcasting: 1\nI0308 13:31:09.107427 1105 log.go:172] (0xc000515c30) Go away received\nI0308 13:31:09.107771 1105 log.go:172] (0xc000515c30) (0xc000a20640) Stream removed, broadcasting: 1\nI0308 13:31:09.107794 1105 log.go:172] (0xc000515c30) (0xc000a20000) Stream removed, broadcasting: 3\nI0308 13:31:09.107809 1105 log.go:172] (0xc000515c30) (0xc000700640) Stream removed, broadcasting: 5\n" Mar 8 13:31:09.110: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 13:31:09.110: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 13:31:29.129: INFO: Deleting all statefulset in ns statefulset-9125 Mar 8 13:31:29.132: INFO: Scaling statefulset ss2 to 0 Mar 8 13:31:59.148: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 13:31:59.151: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:31:59.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9125" for this suite. 
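Editor's note (not part of the captured log): the rolling-update and rollback sequence above is easier to follow with the object it manipulates in view. The sketch below reconstructs a StatefulSet roughly matching what the test creates. The name `ss2`, replica count of 3, headless service `test`, namespace `statefulset-9125`, and the `httpd:2.4.38-alpine` image all come from the log; the labels and container name are assumptions, since the log does not show the actual manifest.

```yaml
# Sketch only: reconstructed from log output, not the test's real manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-9125
spec:
  serviceName: test          # headless service created in the BeforeEach step
  replicas: 3                # the log waits for pods ss2-0, ss2-1, ss2-2
  selector:
    matchLabels:
      app: ss2               # label is an assumption; not shown in the log
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver      # container name is an assumption
        image: docker.io/library/httpd:2.4.38-alpine  # updated to 2.4.39-alpine mid-test
  updateStrategy:
    type: RollingUpdate      # pods are replaced in reverse ordinal order
```

The "Updating StatefulSet template" step corresponds to patching `spec.template.spec.containers[0].image` to the `2.4.39-alpine` tag, which is why the controller-revision hashes (`ss2-65c7964b94`, `ss2-84f9d6bf57`) alternate in the subsequent wait messages as pods are rolled forward and then back.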
• [SLOW TEST:142.927 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":67,"skipped":1076,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:31:59.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Mar 8 13:31:59.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6349 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true
--stdin -- sh -c cat && echo 'stdin closed'' Mar 8 13:32:01.091: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0308 13:32:01.035924 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a1e0) Create stream\nI0308 13:32:01.035985 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a1e0) Stream added, broadcasting: 1\nI0308 13:32:01.040328 1124 log.go:172] (0xc000b6a2c0) Reply frame received for 1\nI0308 13:32:01.040482 1124 log.go:172] (0xc000b6a2c0) (0xc0002c0000) Create stream\nI0308 13:32:01.040550 1124 log.go:172] (0xc000b6a2c0) (0xc0002c0000) Stream added, broadcasting: 3\nI0308 13:32:01.042669 1124 log.go:172] (0xc000b6a2c0) Reply frame received for 3\nI0308 13:32:01.042728 1124 log.go:172] (0xc000b6a2c0) (0xc00067f900) Create stream\nI0308 13:32:01.042743 1124 log.go:172] (0xc000b6a2c0) (0xc00067f900) Stream added, broadcasting: 5\nI0308 13:32:01.044896 1124 log.go:172] (0xc000b6a2c0) Reply frame received for 5\nI0308 13:32:01.044934 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a320) Create stream\nI0308 13:32:01.044944 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a320) Stream added, broadcasting: 7\nI0308 13:32:01.045816 1124 log.go:172] (0xc000b6a2c0) Reply frame received for 7\nI0308 13:32:01.045923 1124 log.go:172] (0xc0002c0000) (3) Writing data frame\nI0308 13:32:01.046031 1124 log.go:172] (0xc0002c0000) (3) Writing data frame\nI0308 13:32:01.046974 1124 log.go:172] (0xc000b6a2c0) Data frame received for 5\nI0308 13:32:01.047005 1124 log.go:172] (0xc00067f900) (5) Data frame handling\nI0308 13:32:01.047030 1124 log.go:172] (0xc00067f900) (5) Data frame sent\nI0308 13:32:01.047590 1124 log.go:172] (0xc000b6a2c0) Data frame received for 5\nI0308 13:32:01.047615 1124 log.go:172] (0xc00067f900) (5) Data frame handling\nI0308 13:32:01.047633 1124 log.go:172] (0xc00067f900) (5) Data frame sent\nI0308 
13:32:01.063484 1124 log.go:172] (0xc000b6a2c0) Data frame received for 7\nI0308 13:32:01.063504 1124 log.go:172] (0xc000b8a320) (7) Data frame handling\nI0308 13:32:01.064005 1124 log.go:172] (0xc000b6a2c0) Data frame received for 5\nI0308 13:32:01.064035 1124 log.go:172] (0xc00067f900) (5) Data frame handling\nI0308 13:32:01.064896 1124 log.go:172] (0xc000b6a2c0) Data frame received for 1\nI0308 13:32:01.064924 1124 log.go:172] (0xc000b8a1e0) (1) Data frame handling\nI0308 13:32:01.064942 1124 log.go:172] (0xc000b8a1e0) (1) Data frame sent\nI0308 13:32:01.065159 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a1e0) Stream removed, broadcasting: 1\nI0308 13:32:01.065251 1124 log.go:172] (0xc000b6a2c0) (0xc0002c0000) Stream removed, broadcasting: 3\nI0308 13:32:01.065294 1124 log.go:172] (0xc000b6a2c0) Go away received\nI0308 13:32:01.065455 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a1e0) Stream removed, broadcasting: 1\nI0308 13:32:01.065483 1124 log.go:172] (0xc000b6a2c0) (0xc0002c0000) Stream removed, broadcasting: 3\nI0308 13:32:01.065498 1124 log.go:172] (0xc000b6a2c0) (0xc00067f900) Stream removed, broadcasting: 5\nI0308 13:32:01.065518 1124 log.go:172] (0xc000b6a2c0) (0xc000b8a320) Stream removed, broadcasting: 7\n" Mar 8 13:32:01.091: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:32:03.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6349" for this suite. 
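Editor's note (not part of the captured log): the `kubectl run --rm` invocation above goes through the deprecated `--generator=job/v1` path, as its own stderr warns. As a hedged sketch, the Job that generator produces is approximately the following; the name, image, namespace, restart policy, and command are taken from the logged command line, while the container name and stdin details are assumptions.

```yaml
# Sketch only: approximate Job produced by the deprecated job/v1 generator.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
  namespace: kubectl-6349
spec:
  template:
    spec:
      restartPolicy: OnFailure            # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job     # container name is an assumption
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                       # matches --stdin/--attach in the log
```

`--rm=true` makes kubectl delete the Job once the attached session ends, which is what the `job.batch "e2e-test-rm-busybox-job" deleted` line in the captured stdout confirms.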
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":68,"skipped":1097,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:32:03.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 8 13:32:03.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3269'
Mar 8 13:32:03.309: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 8 13:32:03.309: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Mar 8 13:32:05.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3269'
Mar 8 13:32:05.487: INFO: stderr: ""
Mar 8 13:32:05.487: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:32:05.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3269" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":69,"skipped":1112,"failed":0}
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:32:05.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP:
Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 8 13:32:11.633: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:11.633: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:11.672245 6 log.go:172] (0xc002c244d0) (0xc0004fb180) Create stream I0308 13:32:11.672273 6 log.go:172] (0xc002c244d0) (0xc0004fb180) Stream added, broadcasting: 1 I0308 13:32:11.674266 6 log.go:172] (0xc002c244d0) Reply frame received for 1 I0308 13:32:11.674306 6 log.go:172] (0xc002c244d0) (0xc00113a3c0) Create stream I0308 13:32:11.674321 6 log.go:172] (0xc002c244d0) (0xc00113a3c0) Stream added, broadcasting: 3 I0308 13:32:11.675335 6 log.go:172] (0xc002c244d0) Reply frame received for 3 I0308 13:32:11.675377 6 log.go:172] (0xc002c244d0) (0xc000dab540) Create stream I0308 13:32:11.675396 6 log.go:172] (0xc002c244d0) (0xc000dab540) Stream added, broadcasting: 5 I0308 13:32:11.676582 6 log.go:172] (0xc002c244d0) Reply frame received for 5 I0308 13:32:11.739845 6 log.go:172] (0xc002c244d0) Data frame received for 3 I0308 13:32:11.739878 6 log.go:172] (0xc00113a3c0) (3) Data frame handling I0308 13:32:11.739906 6 log.go:172] (0xc00113a3c0) (3) Data frame sent I0308 13:32:11.739923 6 log.go:172] (0xc002c244d0) Data frame received for 3 I0308 13:32:11.739937 6 log.go:172] (0xc00113a3c0) (3) Data frame handling I0308 13:32:11.740246 6 log.go:172] (0xc002c244d0) Data frame received for 5 I0308 13:32:11.740267 6 log.go:172] (0xc000dab540) (5) Data frame handling I0308 13:32:11.742995 6 log.go:172] (0xc002c244d0) Data frame received for 1 I0308 13:32:11.743016 6 log.go:172] (0xc0004fb180) (1) Data frame handling I0308 13:32:11.743035 6 log.go:172] (0xc0004fb180) (1) Data frame sent I0308 13:32:11.743138 6 log.go:172] (0xc002c244d0) (0xc0004fb180) 
Stream removed, broadcasting: 1 I0308 13:32:11.743231 6 log.go:172] (0xc002c244d0) (0xc0004fb180) Stream removed, broadcasting: 1 I0308 13:32:11.743250 6 log.go:172] (0xc002c244d0) (0xc00113a3c0) Stream removed, broadcasting: 3 I0308 13:32:11.743265 6 log.go:172] (0xc002c244d0) (0xc000dab540) Stream removed, broadcasting: 5 I0308 13:32:11.743284 6 log.go:172] (0xc002c244d0) Go away received Mar 8 13:32:11.743: INFO: Exec stderr: "" Mar 8 13:32:11.743: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:11.743: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:11.777585 6 log.go:172] (0xc002c24b00) (0xc00040b540) Create stream I0308 13:32:11.777623 6 log.go:172] (0xc002c24b00) (0xc00040b540) Stream added, broadcasting: 1 I0308 13:32:11.779259 6 log.go:172] (0xc002c24b00) Reply frame received for 1 I0308 13:32:11.779298 6 log.go:172] (0xc002c24b00) (0xc000dab680) Create stream I0308 13:32:11.779311 6 log.go:172] (0xc002c24b00) (0xc000dab680) Stream added, broadcasting: 3 I0308 13:32:11.780226 6 log.go:172] (0xc002c24b00) Reply frame received for 3 I0308 13:32:11.780257 6 log.go:172] (0xc002c24b00) (0xc00113a640) Create stream I0308 13:32:11.780270 6 log.go:172] (0xc002c24b00) (0xc00113a640) Stream added, broadcasting: 5 I0308 13:32:11.781212 6 log.go:172] (0xc002c24b00) Reply frame received for 5 I0308 13:32:11.841791 6 log.go:172] (0xc002c24b00) Data frame received for 5 I0308 13:32:11.841817 6 log.go:172] (0xc00113a640) (5) Data frame handling I0308 13:32:11.841837 6 log.go:172] (0xc002c24b00) Data frame received for 3 I0308 13:32:11.841854 6 log.go:172] (0xc000dab680) (3) Data frame handling I0308 13:32:11.841875 6 log.go:172] (0xc000dab680) (3) Data frame sent I0308 13:32:11.841891 6 log.go:172] (0xc002c24b00) Data frame received for 3 I0308 13:32:11.841905 6 log.go:172] (0xc000dab680) (3) Data 
frame handling I0308 13:32:11.843372 6 log.go:172] (0xc002c24b00) Data frame received for 1 I0308 13:32:11.843393 6 log.go:172] (0xc00040b540) (1) Data frame handling I0308 13:32:11.843412 6 log.go:172] (0xc00040b540) (1) Data frame sent I0308 13:32:11.843523 6 log.go:172] (0xc002c24b00) (0xc00040b540) Stream removed, broadcasting: 1 I0308 13:32:11.843655 6 log.go:172] (0xc002c24b00) Go away received I0308 13:32:11.843734 6 log.go:172] (0xc002c24b00) (0xc00040b540) Stream removed, broadcasting: 1 I0308 13:32:11.843757 6 log.go:172] (0xc002c24b00) (0xc000dab680) Stream removed, broadcasting: 3 I0308 13:32:11.843770 6 log.go:172] (0xc002c24b00) (0xc00113a640) Stream removed, broadcasting: 5 Mar 8 13:32:11.843: INFO: Exec stderr: "" Mar 8 13:32:11.843: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:11.843: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:11.875788 6 log.go:172] (0xc0024824d0) (0xc0003337c0) Create stream I0308 13:32:11.875810 6 log.go:172] (0xc0024824d0) (0xc0003337c0) Stream added, broadcasting: 1 I0308 13:32:11.877738 6 log.go:172] (0xc0024824d0) Reply frame received for 1 I0308 13:32:11.877772 6 log.go:172] (0xc0024824d0) (0xc000ba8fa0) Create stream I0308 13:32:11.877783 6 log.go:172] (0xc0024824d0) (0xc000ba8fa0) Stream added, broadcasting: 3 I0308 13:32:11.878752 6 log.go:172] (0xc0024824d0) Reply frame received for 3 I0308 13:32:11.878796 6 log.go:172] (0xc0024824d0) (0xc000259540) Create stream I0308 13:32:11.878816 6 log.go:172] (0xc0024824d0) (0xc000259540) Stream added, broadcasting: 5 I0308 13:32:11.880046 6 log.go:172] (0xc0024824d0) Reply frame received for 5 I0308 13:32:11.942181 6 log.go:172] (0xc0024824d0) Data frame received for 3 I0308 13:32:11.942209 6 log.go:172] (0xc000ba8fa0) (3) Data frame handling I0308 13:32:11.942224 6 log.go:172] (0xc000ba8fa0) (3) Data frame 
sent I0308 13:32:11.942233 6 log.go:172] (0xc0024824d0) Data frame received for 3 I0308 13:32:11.942241 6 log.go:172] (0xc000ba8fa0) (3) Data frame handling I0308 13:32:11.942270 6 log.go:172] (0xc0024824d0) Data frame received for 5 I0308 13:32:11.942293 6 log.go:172] (0xc000259540) (5) Data frame handling I0308 13:32:11.943660 6 log.go:172] (0xc0024824d0) Data frame received for 1 I0308 13:32:11.943683 6 log.go:172] (0xc0003337c0) (1) Data frame handling I0308 13:32:11.943704 6 log.go:172] (0xc0003337c0) (1) Data frame sent I0308 13:32:11.943718 6 log.go:172] (0xc0024824d0) (0xc0003337c0) Stream removed, broadcasting: 1 I0308 13:32:11.943734 6 log.go:172] (0xc0024824d0) Go away received I0308 13:32:11.943880 6 log.go:172] (0xc0024824d0) (0xc0003337c0) Stream removed, broadcasting: 1 I0308 13:32:11.943911 6 log.go:172] (0xc0024824d0) (0xc000ba8fa0) Stream removed, broadcasting: 3 I0308 13:32:11.943959 6 log.go:172] (0xc0024824d0) (0xc000259540) Stream removed, broadcasting: 5 Mar 8 13:32:11.943: INFO: Exec stderr: "" Mar 8 13:32:11.944: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:11.944: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:11.979125 6 log.go:172] (0xc002c78580) (0xc001196320) Create stream I0308 13:32:11.979151 6 log.go:172] (0xc002c78580) (0xc001196320) Stream added, broadcasting: 1 I0308 13:32:11.982013 6 log.go:172] (0xc002c78580) Reply frame received for 1 I0308 13:32:11.982051 6 log.go:172] (0xc002c78580) (0xc0013920a0) Create stream I0308 13:32:11.982065 6 log.go:172] (0xc002c78580) (0xc0013920a0) Stream added, broadcasting: 3 I0308 13:32:11.983259 6 log.go:172] (0xc002c78580) Reply frame received for 3 I0308 13:32:11.983300 6 log.go:172] (0xc002c78580) (0xc00113a820) Create stream I0308 13:32:11.983316 6 log.go:172] (0xc002c78580) (0xc00113a820) Stream added, 
broadcasting: 5 I0308 13:32:11.984399 6 log.go:172] (0xc002c78580) Reply frame received for 5 I0308 13:32:12.037940 6 log.go:172] (0xc002c78580) Data frame received for 5 I0308 13:32:12.037962 6 log.go:172] (0xc00113a820) (5) Data frame handling I0308 13:32:12.037990 6 log.go:172] (0xc002c78580) Data frame received for 3 I0308 13:32:12.038005 6 log.go:172] (0xc0013920a0) (3) Data frame handling I0308 13:32:12.038023 6 log.go:172] (0xc0013920a0) (3) Data frame sent I0308 13:32:12.038034 6 log.go:172] (0xc002c78580) Data frame received for 3 I0308 13:32:12.038044 6 log.go:172] (0xc0013920a0) (3) Data frame handling I0308 13:32:12.040093 6 log.go:172] (0xc002c78580) Data frame received for 1 I0308 13:32:12.040121 6 log.go:172] (0xc001196320) (1) Data frame handling I0308 13:32:12.040143 6 log.go:172] (0xc001196320) (1) Data frame sent I0308 13:32:12.040157 6 log.go:172] (0xc002c78580) (0xc001196320) Stream removed, broadcasting: 1 I0308 13:32:12.040209 6 log.go:172] (0xc002c78580) Go away received I0308 13:32:12.040333 6 log.go:172] (0xc002c78580) (0xc001196320) Stream removed, broadcasting: 1 I0308 13:32:12.040366 6 log.go:172] (0xc002c78580) (0xc0013920a0) Stream removed, broadcasting: 3 I0308 13:32:12.040377 6 log.go:172] (0xc002c78580) (0xc00113a820) Stream removed, broadcasting: 5 Mar 8 13:32:12.040: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 8 13:32:12.040: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:12.040: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:12.076958 6 log.go:172] (0xc002c251e0) (0xc001392aa0) Create stream I0308 13:32:12.076995 6 log.go:172] (0xc002c251e0) (0xc001392aa0) Stream added, broadcasting: 1 I0308 13:32:12.080265 6 log.go:172] (0xc002c251e0) Reply frame received for 1 I0308 
13:32:12.080302 6 log.go:172] (0xc002c251e0) (0xc0016c8000) Create stream I0308 13:32:12.080314 6 log.go:172] (0xc002c251e0) (0xc0016c8000) Stream added, broadcasting: 3 I0308 13:32:12.081064 6 log.go:172] (0xc002c251e0) Reply frame received for 3 I0308 13:32:12.081092 6 log.go:172] (0xc002c251e0) (0xc00113a960) Create stream I0308 13:32:12.081104 6 log.go:172] (0xc002c251e0) (0xc00113a960) Stream added, broadcasting: 5 I0308 13:32:12.081938 6 log.go:172] (0xc002c251e0) Reply frame received for 5 I0308 13:32:12.132637 6 log.go:172] (0xc002c251e0) Data frame received for 5 I0308 13:32:12.132668 6 log.go:172] (0xc00113a960) (5) Data frame handling I0308 13:32:12.132688 6 log.go:172] (0xc002c251e0) Data frame received for 3 I0308 13:32:12.132700 6 log.go:172] (0xc0016c8000) (3) Data frame handling I0308 13:32:12.132709 6 log.go:172] (0xc0016c8000) (3) Data frame sent I0308 13:32:12.132717 6 log.go:172] (0xc002c251e0) Data frame received for 3 I0308 13:32:12.132723 6 log.go:172] (0xc0016c8000) (3) Data frame handling I0308 13:32:12.133834 6 log.go:172] (0xc002c251e0) Data frame received for 1 I0308 13:32:12.133853 6 log.go:172] (0xc001392aa0) (1) Data frame handling I0308 13:32:12.133867 6 log.go:172] (0xc001392aa0) (1) Data frame sent I0308 13:32:12.133901 6 log.go:172] (0xc002c251e0) (0xc001392aa0) Stream removed, broadcasting: 1 I0308 13:32:12.133925 6 log.go:172] (0xc002c251e0) Go away received I0308 13:32:12.134038 6 log.go:172] (0xc002c251e0) (0xc001392aa0) Stream removed, broadcasting: 1 I0308 13:32:12.134071 6 log.go:172] (0xc002c251e0) (0xc0016c8000) Stream removed, broadcasting: 3 I0308 13:32:12.134093 6 log.go:172] (0xc002c251e0) (0xc00113a960) Stream removed, broadcasting: 5 Mar 8 13:32:12.134: INFO: Exec stderr: "" Mar 8 13:32:12.134: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:12.134: 
INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:12.165504 6 log.go:172] (0xc0033e4160) (0xc000ba9d60) Create stream I0308 13:32:12.165525 6 log.go:172] (0xc0033e4160) (0xc000ba9d60) Stream added, broadcasting: 1 I0308 13:32:12.167758 6 log.go:172] (0xc0033e4160) Reply frame received for 1 I0308 13:32:12.167800 6 log.go:172] (0xc0033e4160) (0xc000ba9e00) Create stream I0308 13:32:12.167809 6 log.go:172] (0xc0033e4160) (0xc000ba9e00) Stream added, broadcasting: 3 I0308 13:32:12.168757 6 log.go:172] (0xc0033e4160) Reply frame received for 3 I0308 13:32:12.168803 6 log.go:172] (0xc0033e4160) (0xc0011963c0) Create stream I0308 13:32:12.168822 6 log.go:172] (0xc0033e4160) (0xc0011963c0) Stream added, broadcasting: 5 I0308 13:32:12.169924 6 log.go:172] (0xc0033e4160) Reply frame received for 5 I0308 13:32:12.227632 6 log.go:172] (0xc0033e4160) Data frame received for 3 I0308 13:32:12.227661 6 log.go:172] (0xc000ba9e00) (3) Data frame handling I0308 13:32:12.227673 6 log.go:172] (0xc000ba9e00) (3) Data frame sent I0308 13:32:12.227682 6 log.go:172] (0xc0033e4160) Data frame received for 3 I0308 13:32:12.227690 6 log.go:172] (0xc000ba9e00) (3) Data frame handling I0308 13:32:12.227736 6 log.go:172] (0xc0033e4160) Data frame received for 5 I0308 13:32:12.227759 6 log.go:172] (0xc0011963c0) (5) Data frame handling I0308 13:32:12.228592 6 log.go:172] (0xc0033e4160) Data frame received for 1 I0308 13:32:12.228611 6 log.go:172] (0xc000ba9d60) (1) Data frame handling I0308 13:32:12.228623 6 log.go:172] (0xc000ba9d60) (1) Data frame sent I0308 13:32:12.228637 6 log.go:172] (0xc0033e4160) (0xc000ba9d60) Stream removed, broadcasting: 1 I0308 13:32:12.228673 6 log.go:172] (0xc0033e4160) Go away received I0308 13:32:12.228711 6 log.go:172] (0xc0033e4160) (0xc000ba9d60) Stream removed, broadcasting: 1 I0308 13:32:12.228727 6 log.go:172] (0xc0033e4160) (0xc000ba9e00) Stream removed, broadcasting: 3 I0308 13:32:12.228736 6 log.go:172] (0xc0033e4160) (0xc0011963c0) Stream removed, 
broadcasting: 5 Mar 8 13:32:12.228: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 8 13:32:12.228: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:12.228: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:12.249966 6 log.go:172] (0xc0033e4790) (0xc001ad00a0) Create stream I0308 13:32:12.249985 6 log.go:172] (0xc0033e4790) (0xc001ad00a0) Stream added, broadcasting: 1 I0308 13:32:12.254945 6 log.go:172] (0xc0033e4790) Reply frame received for 1 I0308 13:32:12.254991 6 log.go:172] (0xc0033e4790) (0xc0016c81e0) Create stream I0308 13:32:12.255005 6 log.go:172] (0xc0033e4790) (0xc0016c81e0) Stream added, broadcasting: 3 I0308 13:32:12.256136 6 log.go:172] (0xc0033e4790) Reply frame received for 3 I0308 13:32:12.256164 6 log.go:172] (0xc0033e4790) (0xc001196460) Create stream I0308 13:32:12.256176 6 log.go:172] (0xc0033e4790) (0xc001196460) Stream added, broadcasting: 5 I0308 13:32:12.257203 6 log.go:172] (0xc0033e4790) Reply frame received for 5 I0308 13:32:12.323930 6 log.go:172] (0xc0033e4790) Data frame received for 3 I0308 13:32:12.323962 6 log.go:172] (0xc0016c81e0) (3) Data frame handling I0308 13:32:12.323979 6 log.go:172] (0xc0016c81e0) (3) Data frame sent I0308 13:32:12.323988 6 log.go:172] (0xc0033e4790) Data frame received for 3 I0308 13:32:12.323998 6 log.go:172] (0xc0016c81e0) (3) Data frame handling I0308 13:32:12.324017 6 log.go:172] (0xc0033e4790) Data frame received for 5 I0308 13:32:12.324032 6 log.go:172] (0xc001196460) (5) Data frame handling I0308 13:32:12.325328 6 log.go:172] (0xc0033e4790) Data frame received for 1 I0308 13:32:12.325345 6 log.go:172] (0xc001ad00a0) (1) Data frame handling I0308 13:32:12.325365 6 log.go:172] (0xc001ad00a0) (1) Data frame sent I0308 13:32:12.325462 6 log.go:172] 
(0xc0033e4790) (0xc001ad00a0) Stream removed, broadcasting: 1 I0308 13:32:12.325498 6 log.go:172] (0xc0033e4790) Go away received I0308 13:32:12.325579 6 log.go:172] (0xc0033e4790) (0xc001ad00a0) Stream removed, broadcasting: 1 I0308 13:32:12.325604 6 log.go:172] (0xc0033e4790) (0xc0016c81e0) Stream removed, broadcasting: 3 I0308 13:32:12.325624 6 log.go:172] (0xc0033e4790) (0xc001196460) Stream removed, broadcasting: 5 Mar 8 13:32:12.325: INFO: Exec stderr: "" Mar 8 13:32:12.325: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:12.325: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:12.356360 6 log.go:172] (0xc002c78d10) (0xc001196c80) Create stream I0308 13:32:12.356400 6 log.go:172] (0xc002c78d10) (0xc001196c80) Stream added, broadcasting: 1 I0308 13:32:12.358508 6 log.go:172] (0xc002c78d10) Reply frame received for 1 I0308 13:32:12.358536 6 log.go:172] (0xc002c78d10) (0xc001ad01e0) Create stream I0308 13:32:12.358546 6 log.go:172] (0xc002c78d10) (0xc001ad01e0) Stream added, broadcasting: 3 I0308 13:32:12.359417 6 log.go:172] (0xc002c78d10) Reply frame received for 3 I0308 13:32:12.359442 6 log.go:172] (0xc002c78d10) (0xc00113ad20) Create stream I0308 13:32:12.359450 6 log.go:172] (0xc002c78d10) (0xc00113ad20) Stream added, broadcasting: 5 I0308 13:32:12.360210 6 log.go:172] (0xc002c78d10) Reply frame received for 5 I0308 13:32:12.408639 6 log.go:172] (0xc002c78d10) Data frame received for 3 I0308 13:32:12.408662 6 log.go:172] (0xc001ad01e0) (3) Data frame handling I0308 13:32:12.408673 6 log.go:172] (0xc001ad01e0) (3) Data frame sent I0308 13:32:12.408684 6 log.go:172] (0xc002c78d10) Data frame received for 3 I0308 13:32:12.408695 6 log.go:172] (0xc001ad01e0) (3) Data frame handling I0308 13:32:12.408785 6 log.go:172] (0xc002c78d10) Data frame received for 5 I0308 
13:32:12.408799 6 log.go:172] (0xc00113ad20) (5) Data frame handling I0308 13:32:12.410296 6 log.go:172] (0xc002c78d10) Data frame received for 1 I0308 13:32:12.410320 6 log.go:172] (0xc001196c80) (1) Data frame handling I0308 13:32:12.410340 6 log.go:172] (0xc001196c80) (1) Data frame sent I0308 13:32:12.410357 6 log.go:172] (0xc002c78d10) (0xc001196c80) Stream removed, broadcasting: 1 I0308 13:32:12.410374 6 log.go:172] (0xc002c78d10) Go away received I0308 13:32:12.410480 6 log.go:172] (0xc002c78d10) (0xc001196c80) Stream removed, broadcasting: 1 I0308 13:32:12.410498 6 log.go:172] (0xc002c78d10) (0xc001ad01e0) Stream removed, broadcasting: 3 I0308 13:32:12.410508 6 log.go:172] (0xc002c78d10) (0xc00113ad20) Stream removed, broadcasting: 5 Mar 8 13:32:12.410: INFO: Exec stderr: "" Mar 8 13:32:12.410: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:12.410: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:12.438308 6 log.go:172] (0xc002c79340) (0xc001197680) Create stream I0308 13:32:12.438330 6 log.go:172] (0xc002c79340) (0xc001197680) Stream added, broadcasting: 1 I0308 13:32:12.440266 6 log.go:172] (0xc002c79340) Reply frame received for 1 I0308 13:32:12.440311 6 log.go:172] (0xc002c79340) (0xc001ad0280) Create stream I0308 13:32:12.440324 6 log.go:172] (0xc002c79340) (0xc001ad0280) Stream added, broadcasting: 3 I0308 13:32:12.441414 6 log.go:172] (0xc002c79340) Reply frame received for 3 I0308 13:32:12.441440 6 log.go:172] (0xc002c79340) (0xc001ad0320) Create stream I0308 13:32:12.441451 6 log.go:172] (0xc002c79340) (0xc001ad0320) Stream added, broadcasting: 5 I0308 13:32:12.442496 6 log.go:172] (0xc002c79340) Reply frame received for 5 I0308 13:32:12.517567 6 log.go:172] (0xc002c79340) Data frame received for 5 I0308 13:32:12.517594 6 log.go:172] (0xc001ad0320) (5) Data frame handling 
I0308 13:32:12.517633 6 log.go:172] (0xc002c79340) Data frame received for 3 I0308 13:32:12.517659 6 log.go:172] (0xc001ad0280) (3) Data frame handling I0308 13:32:12.517680 6 log.go:172] (0xc001ad0280) (3) Data frame sent I0308 13:32:12.517688 6 log.go:172] (0xc002c79340) Data frame received for 3 I0308 13:32:12.517698 6 log.go:172] (0xc001ad0280) (3) Data frame handling I0308 13:32:12.519055 6 log.go:172] (0xc002c79340) Data frame received for 1 I0308 13:32:12.519089 6 log.go:172] (0xc001197680) (1) Data frame handling I0308 13:32:12.519125 6 log.go:172] (0xc001197680) (1) Data frame sent I0308 13:32:12.519265 6 log.go:172] (0xc002c79340) (0xc001197680) Stream removed, broadcasting: 1 I0308 13:32:12.519327 6 log.go:172] (0xc002c79340) Go away received I0308 13:32:12.519485 6 log.go:172] (0xc002c79340) (0xc001197680) Stream removed, broadcasting: 1 I0308 13:32:12.519514 6 log.go:172] (0xc002c79340) (0xc001ad0280) Stream removed, broadcasting: 3 I0308 13:32:12.519527 6 log.go:172] (0xc002c79340) (0xc001ad0320) Stream removed, broadcasting: 5 Mar 8 13:32:12.519: INFO: Exec stderr: "" Mar 8 13:32:12.519: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:32:12.519: INFO: >>> kubeConfig: /root/.kube/config I0308 13:32:12.548936 6 log.go:172] (0xc002482bb0) (0xc0016c85a0) Create stream I0308 13:32:12.548957 6 log.go:172] (0xc002482bb0) (0xc0016c85a0) Stream added, broadcasting: 1 I0308 13:32:12.551154 6 log.go:172] (0xc002482bb0) Reply frame received for 1 I0308 13:32:12.551189 6 log.go:172] (0xc002482bb0) (0xc001ad0460) Create stream I0308 13:32:12.551199 6 log.go:172] (0xc002482bb0) (0xc001ad0460) Stream added, broadcasting: 3 I0308 13:32:12.551954 6 log.go:172] (0xc002482bb0) Reply frame received for 3 I0308 13:32:12.551979 6 log.go:172] (0xc002482bb0) (0xc00113af00) Create stream I0308 
13:32:12.551989 6 log.go:172] (0xc002482bb0) (0xc00113af00) Stream added, broadcasting: 5 I0308 13:32:12.552745 6 log.go:172] (0xc002482bb0) Reply frame received for 5 I0308 13:32:12.625435 6 log.go:172] (0xc002482bb0) Data frame received for 5 I0308 13:32:12.625459 6 log.go:172] (0xc00113af00) (5) Data frame handling I0308 13:32:12.625476 6 log.go:172] (0xc002482bb0) Data frame received for 3 I0308 13:32:12.625489 6 log.go:172] (0xc001ad0460) (3) Data frame handling I0308 13:32:12.625505 6 log.go:172] (0xc001ad0460) (3) Data frame sent I0308 13:32:12.625517 6 log.go:172] (0xc002482bb0) Data frame received for 3 I0308 13:32:12.625527 6 log.go:172] (0xc001ad0460) (3) Data frame handling I0308 13:32:12.626762 6 log.go:172] (0xc002482bb0) Data frame received for 1 I0308 13:32:12.626782 6 log.go:172] (0xc0016c85a0) (1) Data frame handling I0308 13:32:12.626798 6 log.go:172] (0xc0016c85a0) (1) Data frame sent I0308 13:32:12.626811 6 log.go:172] (0xc002482bb0) (0xc0016c85a0) Stream removed, broadcasting: 1 I0308 13:32:12.626889 6 log.go:172] (0xc002482bb0) (0xc0016c85a0) Stream removed, broadcasting: 1 I0308 13:32:12.626902 6 log.go:172] (0xc002482bb0) (0xc001ad0460) Stream removed, broadcasting: 3 I0308 13:32:12.626984 6 log.go:172] (0xc002482bb0) Go away received I0308 13:32:12.627086 6 log.go:172] (0xc002482bb0) (0xc00113af00) Stream removed, broadcasting: 5 Mar 8 13:32:12.627: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:32:12.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4438" for this suite. 
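For context, the rule this KubeletManagedEtcHosts test exercises can be sketched as follows: the kubelet injects a managed /etc/hosts only when the pod is not using the host network namespace and the container does not mount its own file over /etc/hosts, which is why the exec checks above expect a managed file for busybox-1/busybox-2 of test-pod but an unmanaged one for busybox-3 (which mounts /etc/hosts itself) and for all containers of test-host-network-pod. A minimal sketch; the function name and parameters are illustrative, not the kubelet's actual API:

```python
def kubelet_manages_etc_hosts(host_network: bool, mounts_etc_hosts: bool) -> bool:
    """Mirror the rule the e2e test verifies: the kubelet writes a managed
    /etc/hosts only when the pod is not on the host network and the
    container does not mount its own file over /etc/hosts."""
    return not host_network and not mounts_etc_hosts

# test-pod busybox-1 / busybox-2: kubelet-managed
print(kubelet_manages_etc_hosts(host_network=False, mounts_etc_hosts=False))  # True
# test-pod busybox-3 mounts /etc/hosts itself: not managed
print(kubelet_manages_etc_hosts(host_network=False, mounts_etc_hosts=True))   # False
# test-host-network-pod containers: not managed
print(kubelet_manages_etc_hosts(host_network=True, mounts_etc_hosts=False))   # False
```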
• [SLOW TEST:7.129 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1117,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:32:12.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-5e026621-307c-497d-ae90-36c125e896c2 in namespace container-probe-7421 Mar 8 13:32:14.742: INFO: Started pod test-webserver-5e026621-307c-497d-ae90-36c125e896c2 in namespace container-probe-7421 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 13:32:14.744: INFO: Initial restart count of pod test-webserver-5e026621-307c-497d-ae90-36c125e896c2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:15.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7421" for this suite. • [SLOW TEST:242.915 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1120,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:15.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:36:15.627: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64" in namespace "projected-8204" to be "success or 
failure" Mar 8 13:36:15.668: INFO: Pod "downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64": Phase="Pending", Reason="", readiness=false. Elapsed: 40.375151ms Mar 8 13:36:17.672: INFO: Pod "downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044646625s STEP: Saw pod success Mar 8 13:36:17.672: INFO: Pod "downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64" satisfied condition "success or failure" Mar 8 13:36:17.676: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64 container client-container: STEP: delete the pod Mar 8 13:36:17.722: INFO: Waiting for pod downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64 to disappear Mar 8 13:36:17.727: INFO: Pod downwardapi-volume-1b42c4bf-b56c-463a-ad10-507f1b61cb64 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:17.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8204" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1125,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:17.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 13:36:17.807: INFO: Waiting up to 5m0s for pod "pod-c9267369-4227-47e3-8ba6-eb03e67679bd" in namespace "emptydir-9732" to be "success or failure" Mar 8 13:36:17.811: INFO: Pod "pod-c9267369-4227-47e3-8ba6-eb03e67679bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108101ms Mar 8 13:36:19.814: INFO: Pod "pod-c9267369-4227-47e3-8ba6-eb03e67679bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007528367s Mar 8 13:36:21.818: INFO: Pod "pod-c9267369-4227-47e3-8ba6-eb03e67679bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011367807s STEP: Saw pod success Mar 8 13:36:21.818: INFO: Pod "pod-c9267369-4227-47e3-8ba6-eb03e67679bd" satisfied condition "success or failure" Mar 8 13:36:21.821: INFO: Trying to get logs from node kind-worker pod pod-c9267369-4227-47e3-8ba6-eb03e67679bd container test-container: STEP: delete the pod Mar 8 13:36:21.858: INFO: Waiting for pod pod-c9267369-4227-47e3-8ba6-eb03e67679bd to disappear Mar 8 13:36:21.862: INFO: Pod pod-c9267369-4227-47e3-8ba6-eb03e67679bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:21.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9732" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:21.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ab48a901-b254-45b7-b7f4-e286f5013c72 STEP: Creating the pod STEP: Updating configmap 
projected-configmap-test-upd-ab48a901-b254-45b7-b7f4-e286f5013c72 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:25.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1702" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1188,"failed":0} ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:26.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 13:36:26.057: INFO: Waiting up to 5m0s for pod "downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a" in namespace "downward-api-8356" to be "success or failure" Mar 8 13:36:26.060: INFO: Pod "downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342243ms Mar 8 13:36:28.064: INFO: Pod "downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006328916s STEP: Saw pod success Mar 8 13:36:28.064: INFO: Pod "downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a" satisfied condition "success or failure" Mar 8 13:36:28.067: INFO: Trying to get logs from node kind-worker pod downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a container dapi-container: STEP: delete the pod Mar 8 13:36:28.103: INFO: Waiting for pod downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a to disappear Mar 8 13:36:28.110: INFO: Pod downward-api-6b881dae-0575-4a9b-bebd-9c4496a9fa9a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:28.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8356" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1188,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:28.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:36:28.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 8 13:36:28.300: INFO: stderr: "" Mar 8 
13:36:28.300: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:28.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6930" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":76,"skipped":1195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:28.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:36:32.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-67" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1220,"failed":0} ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:36:32.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-sw95c in namespace proxy-4854 I0308 13:36:32.481295 6 runners.go:189] Created replication controller with name: proxy-service-sw95c, namespace: proxy-4854, replica count: 1 I0308 13:36:33.531700 6 runners.go:189] proxy-service-sw95c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 13:36:34.531907 6 runners.go:189] proxy-service-sw95c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 13:36:35.532134 6 runners.go:189] proxy-service-sw95c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 13:36:36.532335 6 runners.go:189] 
proxy-service-sw95c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0308 13:36:37.532502 6 runners.go:189] proxy-service-sw95c Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 13:36:37.534: INFO: setup took 5.106556892s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 8 13:36:37.540: INFO: (0) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 5.894149ms) Mar 8 13:36:37.541: INFO: (0) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 6.438861ms) Mar 8 13:36:37.541: INFO: (0) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 6.712984ms) Mar 8 13:36:37.542: INFO: (0) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 6.910824ms) Mar 8 13:36:37.547: INFO: (0) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 12.122873ms) Mar 8 13:36:37.547: INFO: (0) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... 
(200; 12.110873ms) Mar 8 13:36:37.547: INFO: (0) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 12.126075ms) Mar 8 13:36:37.547: INFO: (0) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 12.304828ms) Mar 8 13:36:37.547: INFO: (0) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 12.795473ms) Mar 8 13:36:37.549: INFO: (0) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 14.009045ms) Mar 8 13:36:37.549: INFO: (0) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 14.155122ms) Mar 8 13:36:37.551: INFO: (0) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: ... (200; 4.971403ms) Mar 8 13:36:37.564: INFO: (1) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 5.002814ms) Mar 8 13:36:37.564: INFO: (1) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 5.077258ms) Mar 8 13:36:37.564: INFO: (1) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 5.176429ms) Mar 8 13:36:37.564: INFO: (1) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 5.270804ms) Mar 8 13:36:37.564: INFO: (1) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 5.295343ms) Mar 8 13:36:37.564: INFO: (1) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 5.184981ms) Mar 8 13:36:37.570: INFO: (1) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 10.933107ms) Mar 8 13:36:37.570: INFO: (1) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test<... 
(200; 4.500024ms) Mar 8 13:36:37.574: INFO: (2) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.591277ms) Mar 8 13:36:37.575: INFO: (2) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 5.062544ms) Mar 8 13:36:37.575: INFO: (2) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 5.21433ms) Mar 8 13:36:37.575: INFO: (2) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 5.346837ms) Mar 8 13:36:37.575: INFO: (2) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 5.471047ms) Mar 8 13:36:37.575: INFO: (2) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 5.545972ms) Mar 8 13:36:37.575: INFO: (2) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 5.619251ms) Mar 8 13:36:37.576: INFO: (2) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 6.515199ms) Mar 8 13:36:37.576: INFO: (2) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 6.574789ms) Mar 8 13:36:37.577: INFO: (2) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 6.690822ms) Mar 8 13:36:37.577: INFO: (2) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 6.974809ms) Mar 8 13:36:37.577: INFO: (2) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 6.959453ms) Mar 8 13:36:37.577: INFO: (2) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 7.017356ms) Mar 8 13:36:37.577: INFO: (2) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 2.332878ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 4.104362ms) Mar 
8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.981967ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.017306ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 4.175174ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 4.251241ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.290562ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 4.262166ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.201801ms) Mar 8 13:36:37.581: INFO: (3) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 4.338557ms) Mar 8 13:36:37.582: INFO: (3) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test<... (200; 6.960298ms) Mar 8 13:36:37.603: INFO: (4) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 7.011231ms) Mar 8 13:36:37.603: INFO: (4) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 7.109557ms) Mar 8 13:36:37.603: INFO: (4) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 7.059298ms) Mar 8 13:36:37.603: INFO: (4) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 6.903502ms) Mar 8 13:36:37.603: INFO: (4) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 7.167256ms) Mar 8 13:36:37.603: INFO: (4) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: ... 
(200; 5.24636ms) Mar 8 13:36:37.611: INFO: (5) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 5.715402ms) Mar 8 13:36:37.611: INFO: (5) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 5.74685ms) Mar 8 13:36:37.611: INFO: (5) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 6.149581ms) Mar 8 13:36:37.611: INFO: (5) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test<... (200; 6.279438ms) Mar 8 13:36:37.611: INFO: (5) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 6.211552ms) Mar 8 13:36:37.612: INFO: (5) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 6.436146ms) Mar 8 13:36:37.612: INFO: (5) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 6.780987ms) Mar 8 13:36:37.612: INFO: (5) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 6.928525ms) Mar 8 13:36:37.612: INFO: (5) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 7.259594ms) Mar 8 13:36:37.612: INFO: (5) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 7.404771ms) Mar 8 13:36:37.613: INFO: (5) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 7.532999ms) Mar 8 13:36:37.613: INFO: (5) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 7.583984ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 7.049455ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 7.013085ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 
7.057262ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 7.00315ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 6.995151ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 7.052231ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 7.114306ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 7.156419ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 7.05893ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 7.129646ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 7.195371ms) Mar 8 13:36:37.620: INFO: (6) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 7.204024ms) Mar 8 13:36:37.624: INFO: (7) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.735903ms) Mar 8 13:36:37.624: INFO: (7) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 4.107331ms) Mar 8 13:36:37.624: INFO: (7) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: ... 
(200; 5.92445ms) Mar 8 13:36:37.626: INFO: (7) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 6.068254ms) Mar 8 13:36:37.626: INFO: (7) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 5.996136ms) Mar 8 13:36:37.626: INFO: (7) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 5.959685ms) Mar 8 13:36:37.626: INFO: (7) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 5.960505ms) Mar 8 13:36:37.626: INFO: (7) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 5.947985ms) Mar 8 13:36:37.626: INFO: (7) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 6.261042ms) Mar 8 13:36:37.629: INFO: (8) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 2.315693ms) Mar 8 13:36:37.629: INFO: (8) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 2.684488ms) Mar 8 13:36:37.647: INFO: (8) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 20.700011ms) Mar 8 13:36:37.648: INFO: (8) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... 
(200; 20.97439ms) Mar 8 13:36:37.648: INFO: (8) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 21.210098ms) Mar 8 13:36:37.648: INFO: (8) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 21.321331ms) Mar 8 13:36:37.648: INFO: (8) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 21.715426ms) Mar 8 13:36:37.648: INFO: (8) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 21.739803ms) Mar 8 13:36:37.649: INFO: (8) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 22.374406ms) Mar 8 13:36:37.649: INFO: (8) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 22.924076ms) Mar 8 13:36:37.650: INFO: (8) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 23.405844ms) Mar 8 13:36:37.650: INFO: (8) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 23.708394ms) Mar 8 13:36:37.650: INFO: (8) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 23.68977ms) Mar 8 13:36:37.650: INFO: (8) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 23.939299ms) Mar 8 13:36:37.650: INFO: (8) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 23.937864ms) Mar 8 13:36:37.665: INFO: (9) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 14.404969ms) Mar 8 13:36:37.680: INFO: (9) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 29.018883ms) Mar 8 13:36:37.681: INFO: (9) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 30.834665ms) Mar 8 13:36:37.682: INFO: (9) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... 
(200; 31.001791ms) Mar 8 13:36:37.683: INFO: (9) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 32.109817ms) Mar 8 13:36:37.683: INFO: (9) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 32.415474ms) Mar 8 13:36:37.683: INFO: (9) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 32.390476ms) Mar 8 13:36:37.683: INFO: (9) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 32.436449ms) Mar 8 13:36:37.683: INFO: (9) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 6.032852ms) Mar 8 13:36:37.690: INFO: (10) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 6.205232ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 6.644206ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 6.817547ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 6.820585ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test<... (200; 7.222082ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... 
(200; 7.187733ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 7.201007ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 7.2671ms) Mar 8 13:36:37.691: INFO: (10) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 7.166958ms) Mar 8 13:36:37.692: INFO: (10) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 7.356785ms) Mar 8 13:36:37.692: INFO: (10) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 7.493905ms) Mar 8 13:36:37.694: INFO: (11) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test<... (200; 3.535781ms) Mar 8 13:36:37.695: INFO: (11) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 3.551173ms) Mar 8 13:36:37.695: INFO: (11) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 3.554408ms) Mar 8 13:36:37.695: INFO: (11) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 3.61884ms) Mar 8 13:36:37.696: INFO: (11) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.489489ms) Mar 8 13:36:37.696: INFO: (11) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 4.711798ms) Mar 8 13:36:37.697: INFO: (11) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 5.020533ms) Mar 8 13:36:37.709: INFO: (11) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 17.550773ms) Mar 8 13:36:37.709: INFO: (11) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 17.523273ms) Mar 8 13:36:37.709: INFO: (11) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... 
(200; 17.575563ms) Mar 8 13:36:37.710: INFO: (11) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 17.794208ms) Mar 8 13:36:37.710: INFO: (11) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 18.377732ms) Mar 8 13:36:37.710: INFO: (11) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 18.565717ms) Mar 8 13:36:37.710: INFO: (11) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 18.641416ms) Mar 8 13:36:37.710: INFO: (11) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 18.638839ms) Mar 8 13:36:37.715: INFO: (12) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.641776ms) Mar 8 13:36:37.716: INFO: (12) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 5.50973ms) Mar 8 13:36:37.716: INFO: (12) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 5.562717ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 6.033001ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 6.088372ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 6.23159ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... 
(200; 6.212662ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 6.181542ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 6.184075ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 6.287725ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 6.533074ms) Mar 8 13:36:37.717: INFO: (12) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 6.4623ms) Mar 8 13:36:37.720: INFO: (13) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 3.196939ms) Mar 8 13:36:37.720: INFO: (13) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 3.228934ms) Mar 8 13:36:37.721: INFO: (13) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.564342ms) Mar 8 13:36:37.721: INFO: (13) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 3.534192ms) Mar 8 13:36:37.721: INFO: (13) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 3.546148ms) Mar 8 13:36:37.721: INFO: (13) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 3.773669ms) Mar 8 13:36:37.721: INFO: (13) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.761669ms) Mar 8 13:36:37.721: INFO: (13) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... 
(200; 3.903024ms) Mar 8 13:36:37.723: INFO: (13) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 5.410134ms) Mar 8 13:36:37.723: INFO: (13) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 5.433433ms) Mar 8 13:36:37.723: INFO: (13) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 5.410658ms) Mar 8 13:36:37.723: INFO: (13) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 5.439193ms) Mar 8 13:36:37.723: INFO: (13) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 5.385878ms) Mar 8 13:36:37.725: INFO: (14) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 2.259118ms) Mar 8 13:36:37.725: INFO: (14) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 2.550143ms) Mar 8 13:36:37.727: INFO: (14) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 4.133576ms) Mar 8 13:36:37.727: INFO: (14) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 4.580812ms) Mar 8 13:36:37.728: INFO: (14) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 4.846166ms) Mar 8 13:36:37.728: INFO: (14) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.896142ms) Mar 8 13:36:37.728: INFO: (14) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 4.941507ms) Mar 8 13:36:37.728: INFO: (14) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 5.282388ms) Mar 8 13:36:37.728: INFO: (14) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 5.279542ms) Mar 8 13:36:37.728: INFO: (14) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... 
(200; 5.655041ms) Mar 8 13:36:37.729: INFO: (14) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 5.826791ms) Mar 8 13:36:37.729: INFO: (14) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 5.803781ms) Mar 8 13:36:37.729: INFO: (14) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 5.86309ms) Mar 8 13:36:37.729: INFO: (14) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 5.876335ms) Mar 8 13:36:37.729: INFO: (14) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 6.033034ms) Mar 8 13:36:37.729: INFO: (14) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 3.593231ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 4.730105ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.661242ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: ... (200; 4.724967ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 4.788912ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.828915ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 4.81777ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... 
(200; 4.863523ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 4.875661ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.933135ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 4.857361ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 5.00865ms) Mar 8 13:36:37.734: INFO: (15) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 5.073747ms) Mar 8 13:36:37.737: INFO: (16) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 3.42782ms) Mar 8 13:36:37.738: INFO: (16) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 3.550169ms) Mar 8 13:36:37.738: INFO: (16) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 3.542076ms) Mar 8 13:36:37.738: INFO: (16) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 3.846798ms) Mar 8 13:36:37.738: INFO: (16) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.800806ms) Mar 8 13:36:37.738: INFO: (16) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 3.805102ms) Mar 8 13:36:37.738: INFO: (16) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: ... (200; 3.416765ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.427576ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 3.398861ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... 
(200; 3.456311ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 3.518165ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 3.78216ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 3.73809ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 3.769934ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 4.169428ms) Mar 8 13:36:37.743: INFO: (17) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test (200; 4.366876ms) Mar 8 13:36:37.744: INFO: (17) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 4.462658ms) Mar 8 13:36:37.744: INFO: (17) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 4.50486ms) Mar 8 13:36:37.744: INFO: (17) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.541589ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 5.173372ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 5.156066ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 5.19651ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 5.126495ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 5.433624ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 
5.387813ms) Mar 8 13:36:37.749: INFO: (18) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 5.564435ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 5.674014ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: test<... (200; 5.9222ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 5.905051ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:1080/proxy/: ... (200; 5.961824ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 6.007211ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 6.011478ms) Mar 8 13:36:37.750: INFO: (18) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 6.151956ms) Mar 8 13:36:37.754: INFO: (19) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm/proxy/: test (200; 4.007507ms) Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname1/proxy/: tls baz (200; 4.385731ms) Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname2/proxy/: bar (200; 4.355893ms) Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.404991ms) Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/services/https:proxy-service-sw95c:tlsportname2/proxy/: tls qux (200; 4.48588ms) Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname2/proxy/: bar (200; 4.447316ms) Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:462/proxy/: tls qux (200; 
4.503955ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:162/proxy/: bar (200; 4.451877ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/services/http:proxy-service-sw95c:portname1/proxy/: foo (200; 4.427473ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:443/proxy/: ... (200; 4.474727ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.765609ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/services/proxy-service-sw95c:portname1/proxy/: foo (200; 4.736578ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:1080/proxy/: test<... (200; 4.896883ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/proxy-service-sw95c-fvxdm:160/proxy/: foo (200; 4.998889ms)
Mar 8 13:36:37.755: INFO: (19) /api/v1/namespaces/proxy-4854/pods/https:proxy-service-sw95c-fvxdm:460/proxy/: tls baz (200; 5.032583ms)
STEP: deleting ReplicationController proxy-service-sw95c in namespace proxy-4854, will wait for the garbage collector to delete the pods
Mar 8 13:36:37.812: INFO: Deleting ReplicationController proxy-service-sw95c took: 4.647714ms
Mar 8 13:36:38.112: INFO: Terminating ReplicationController proxy-service-sw95c pods took: 300.198533ms
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:36:39.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4854" for this suite.
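Every attempt logged above hits the same apiserver proxy-subresource path shape: `/api/v1/namespaces/<ns>/pods/[scheme:]<pod>[:<port>]/proxy/` (and the analogous `/services/` form), where an `http:` or `https:` prefix selects the scheme the apiserver uses to dial the backend. As a minimal sketch of that path shape (the `proxy_path` helper is hypothetical, not part of the e2e suite):

```shell
# Hypothetical helper: build the pod proxy-subresource paths exercised above.
# The pod segment is "[scheme:]name[:port]"; port may be a number, and for
# services a named port like "portname1" is accepted instead.
proxy_path() {
  # $1=namespace  $2=pod segment  $3=trailing path
  printf '/api/v1/namespaces/%s/pods/%s/proxy/%s' "$1" "$2" "$3"
}

proxy_path proxy-4854 http:proxy-service-sw95c-fvxdm:160 ''
# -> /api/v1/namespaces/proxy-4854/pods/http:proxy-service-sw95c-fvxdm:160/proxy/
```

Against a live cluster such a path could be fetched with `kubectl get --raw "$(proxy_path proxy-4854 http:proxy-service-sw95c-fvxdm:160 '')"`, which is effectively what each of the 320 attempts above does.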
• [SLOW TEST:7.239 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":78,"skipped":1220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:36:39.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 8 13:36:40.206: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 8 13:36:43.291: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 13:36:43.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6985-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:36:44.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7610" for this suite.
STEP: Destroying namespace "webhook-7610-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":79,"skipped":1270,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:36:44.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 8 13:36:44.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5685' Mar 8 13:36:44.937: INFO: stderr: "" Mar 8 13:36:44.937: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 13:36:44.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:36:45.010: INFO: stderr: "" Mar 8 13:36:45.010: INFO: stdout: "update-demo-nautilus-6s2kt update-demo-nautilus-mzwtt " Mar 8 13:36:45.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s2kt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:36:45.077: INFO: stderr: "" Mar 8 13:36:45.078: INFO: stdout: "" Mar 8 13:36:45.078: INFO: update-demo-nautilus-6s2kt is created but not running Mar 8 13:36:50.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:36:50.217: INFO: stderr: "" Mar 8 13:36:50.217: INFO: stdout: "update-demo-nautilus-6s2kt update-demo-nautilus-mzwtt " Mar 8 13:36:50.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s2kt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:36:50.334: INFO: stderr: "" Mar 8 13:36:50.334: INFO: stdout: "true" Mar 8 13:36:50.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s2kt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:36:50.437: INFO: stderr: "" Mar 8 13:36:50.437: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:36:50.437: INFO: validating pod update-demo-nautilus-6s2kt Mar 8 13:36:50.442: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:36:50.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 13:36:50.442: INFO: update-demo-nautilus-6s2kt is verified up and running Mar 8 13:36:50.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzwtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:36:50.553: INFO: stderr: "" Mar 8 13:36:50.553: INFO: stdout: "true" Mar 8 13:36:50.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzwtt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:36:50.666: INFO: stderr: "" Mar 8 13:36:50.666: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:36:50.666: INFO: validating pod update-demo-nautilus-mzwtt Mar 8 13:36:50.670: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:36:50.670: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 13:36:50.670: INFO: update-demo-nautilus-mzwtt is verified up and running STEP: scaling down the replication controller Mar 8 13:36:50.673: INFO: scanned /root for discovery docs: Mar 8 13:36:50.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5685' Mar 8 13:36:51.791: INFO: stderr: "" Mar 8 13:36:51.791: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 13:36:51.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:36:51.913: INFO: stderr: "" Mar 8 13:36:51.913: INFO: stdout: "update-demo-nautilus-6s2kt update-demo-nautilus-mzwtt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 13:36:56.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:36:57.050: INFO: stderr: "" Mar 8 13:36:57.050: INFO: stdout: "update-demo-nautilus-6s2kt update-demo-nautilus-mzwtt " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 13:37:02.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:37:02.184: INFO: stderr: "" Mar 8 13:37:02.184: INFO: stdout: "update-demo-nautilus-mzwtt " Mar 8 13:37:02.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzwtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:02.301: INFO: stderr: "" Mar 8 13:37:02.301: INFO: stdout: "true" Mar 8 13:37:02.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzwtt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:02.408: INFO: stderr: "" Mar 8 13:37:02.408: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:37:02.408: INFO: validating pod update-demo-nautilus-mzwtt Mar 8 13:37:02.412: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:37:02.412: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 13:37:02.412: INFO: update-demo-nautilus-mzwtt is verified up and running STEP: scaling up the replication controller Mar 8 13:37:02.415: INFO: scanned /root for discovery docs: Mar 8 13:37:02.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5685' Mar 8 13:37:03.533: INFO: stderr: "" Mar 8 13:37:03.533: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 13:37:03.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:37:03.652: INFO: stderr: "" Mar 8 13:37:03.652: INFO: stdout: "update-demo-nautilus-9npsp update-demo-nautilus-mzwtt " Mar 8 13:37:03.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9npsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:03.747: INFO: stderr: "" Mar 8 13:37:03.747: INFO: stdout: "" Mar 8 13:37:03.747: INFO: update-demo-nautilus-9npsp is created but not running Mar 8 13:37:08.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5685' Mar 8 13:37:08.885: INFO: stderr: "" Mar 8 13:37:08.885: INFO: stdout: "update-demo-nautilus-9npsp update-demo-nautilus-mzwtt " Mar 8 13:37:08.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9npsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:09.007: INFO: stderr: "" Mar 8 13:37:09.007: INFO: stdout: "true" Mar 8 13:37:09.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9npsp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:09.117: INFO: stderr: "" Mar 8 13:37:09.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:37:09.117: INFO: validating pod update-demo-nautilus-9npsp Mar 8 13:37:09.122: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:37:09.122: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 13:37:09.122: INFO: update-demo-nautilus-9npsp is verified up and running Mar 8 13:37:09.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzwtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:09.226: INFO: stderr: "" Mar 8 13:37:09.226: INFO: stdout: "true" Mar 8 13:37:09.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mzwtt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5685' Mar 8 13:37:09.308: INFO: stderr: "" Mar 8 13:37:09.308: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:37:09.308: INFO: validating pod update-demo-nautilus-mzwtt Mar 8 13:37:09.311: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:37:09.311: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 13:37:09.311: INFO: update-demo-nautilus-mzwtt is verified up and running STEP: using delete to clean up resources Mar 8 13:37:09.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5685' Mar 8 13:37:09.395: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 13:37:09.395: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 13:37:09.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5685' Mar 8 13:37:09.508: INFO: stderr: "No resources found in kubectl-5685 namespace.\n" Mar 8 13:37:09.508: INFO: stdout: "" Mar 8 13:37:09.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5685 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 13:37:09.597: INFO: stderr: "" Mar 8 13:37:09.597: INFO: stdout: "update-demo-nautilus-9npsp\nupdate-demo-nautilus-mzwtt\n" Mar 8 13:37:10.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5685' Mar 8 13:37:10.235: INFO: stderr: "No resources found in kubectl-5685 namespace.\n" Mar 8 13:37:10.235: INFO: stdout: "" Mar 8 13:37:10.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5685 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 13:37:10.337: INFO: stderr: "" Mar 8 13:37:10.337: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:10.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5685" for this suite. 
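After each `kubectl scale`, the suite repeatedly lists pods by the `name=update-demo` label and retries until the observed count matches the requested replicas (the `Replicas for name=update-demo: expected=1 actual=2` lines above). A mocked sketch of that poll loop, assuming a hypothetical `list_pods` helper in place of the real `kubectl get pods` template query:

```shell
#!/bin/sh
# Hypothetical stand-in for:
#   kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo
# Here it returns a fixed single pod name so the loop terminates.
list_pods() {
  echo "update-demo-nautilus-mzwtt"
}

# Poll until the label-selected pod count equals the desired replicas,
# giving up after a bounded number of attempts.
wait_for_replicas() {
  want="$1"; tries="$2"
  while [ "$tries" -gt 0 ]; do
    have=$(list_pods | wc -w | tr -d ' ')
    if [ "$have" -eq "$want" ]; then
      echo "scaled to $want"
      return 0
    fi
    echo "Replicas: expected=$want actual=$have"
    tries=$((tries - 1))
    sleep 1   # the e2e framework waits 5s between polls
  done
  return 1
}

wait_for_replicas 1 3
```

The real framework additionally re-verifies each surviving pod (running state, image, served data) after the count converges, as the log shows.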
• [SLOW TEST:25.779 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":80,"skipped":1276,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:10.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 8 13:37:10.407: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 8 13:37:17.435: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:17.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9550" for this suite. • [SLOW TEST:7.102 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:17.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 13:37:17.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 
--image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9940' Mar 8 13:37:17.635: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 13:37:17.635: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Mar 8 13:37:17.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9940' Mar 8 13:37:17.793: INFO: stderr: "" Mar 8 13:37:17.793: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:17.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9940" for this suite. 
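The stderr line above notes that `kubectl run --generator=job/v1` is deprecated in favor of `kubectl create`. As a sketch, the two command lines can be compared directly; only the string construction is exercised here (no cluster call is made), and the namespace/image values are taken from the log:

```shell
#!/bin/sh
# Values from the log output above.
IMAGE=docker.io/library/httpd:2.4.38-alpine
NS=kubectl-9940

# Deprecated form used by this test.
deprecated_cmd="kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=$IMAGE --namespace=$NS"

# Replacement suggested by the deprecation warning: kubectl create job.
replacement_cmd="kubectl create job e2e-test-httpd-job --image=$IMAGE --namespace=$NS"

echo "$replacement_cmd"
```

Both forms produce a `job.batch/e2e-test-httpd-job` object; only the CLI entry point differs.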
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":82,"skipped":1305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:17.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 13:37:17.847: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 13:37:17.871: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 13:37:17.874: INFO: Logging pods the kubelet thinks is on node kind-worker before test Mar 8 13:37:17.878: INFO: kindnet-p9whg from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.878: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 13:37:17.878: INFO: kube-proxy-pz8tf from kube-system started at 2020-03-08 12:58:54 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.878: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:37:17.878: INFO: e2e-test-httpd-job-mmf64 from kubectl-9940 started at 2020-03-08 13:37:17 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.878: INFO: Container e2e-test-httpd-job ready: false, restart count 0 Mar 8 13:37:17.878: INFO: Logging pods the kubelet thinks is on node kind-worker2 before test Mar 8 13:37:17.882: INFO: kindnet-mjfxb from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.882: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 13:37:17.882: INFO: update-demo-nautilus-mzwtt from kubectl-5685 started at 2020-03-08 13:36:45 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.882: INFO: Container update-demo ready: false, restart count 0 Mar 8 13:37:17.882: INFO: kube-proxy-vfcnx from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.882: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:37:17.882: INFO: busybox-scheduling-c6b304b4-3274-4053-a9d1-00315e0c87c5 from kubelet-test-67 started at 2020-03-08 13:36:28 +0000 UTC (1 container statuses recorded) Mar 8 13:37:17.882: INFO: Container busybox-scheduling-c6b304b4-3274-4053-a9d1-00315e0c87c5 ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
(Note: the framework's "Logging pods the kubelet thinks is on node ..." messages above contain a grammatical slip in the captured output; they are reproduced verbatim.)
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa57b148de6bbb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:18.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4173" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":83,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:18.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 8 13:37:19.000: INFO: Waiting up to 5m0s for pod "pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe" in namespace "emptydir-6759" to be "success or failure" Mar 8 13:37:19.023: INFO: Pod "pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe": Phase="Pending", Reason="", 
readiness=false. Elapsed: 23.170267ms Mar 8 13:37:21.027: INFO: Pod "pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026559987s STEP: Saw pod success Mar 8 13:37:21.027: INFO: Pod "pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe" satisfied condition "success or failure" Mar 8 13:37:21.029: INFO: Trying to get logs from node kind-worker2 pod pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe container test-container: STEP: delete the pod Mar 8 13:37:21.048: INFO: Waiting for pod pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe to disappear Mar 8 13:37:21.052: INFO: Pod pod-2d99ee60-5397-47d4-8b1e-dd58c5d82ffe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:21.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6759" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:21.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning 
up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:23.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1694" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":85,"skipped":1449,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:23.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 8 13:37:27.820: INFO: Successfully updated pod "annotationupdatea7103e76-a55a-46f3-95fd-422ce5a3e9a6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:29.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8367" for this suite. 
• [SLOW TEST:6.641 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:29.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:37:29.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d" in namespace "downward-api-1379" to be "success or failure" Mar 8 13:37:29.924: INFO: Pod "downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.598391ms Mar 8 13:37:31.929: INFO: Pod "downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006957582s STEP: Saw pod success Mar 8 13:37:31.929: INFO: Pod "downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d" satisfied condition "success or failure" Mar 8 13:37:31.932: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d container client-container: STEP: delete the pod Mar 8 13:37:31.962: INFO: Waiting for pod downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d to disappear Mar 8 13:37:31.972: INFO: Pod downwardapi-volume-dc22dd0c-8b27-45b7-b7d1-b80f0d72347d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:31.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1379" for this suite. 
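The `"success or failure"` condition polled above resolves once the pod's phase reaches a terminal state: `Succeeded` counts as success, `Failed` as failure, and anything else (such as the `Pending` phase seen first) keeps the poll going. A minimal sketch of that phase classification, with `phase_terminal` as a hypothetical helper in place of the framework's pod-phase lookup:

```shell
#!/bin/sh
# Classify a pod phase the way the "success or failure" wait does:
# terminal success, terminal failure, or keep polling.
phase_terminal() {
  case "$1" in
    Succeeded) echo success ;;
    Failed)    echo failure ;;
    *)         echo pending ;;
  esac
}

phase_terminal Succeeded
```

In the log, the pod moves from `Pending` to `Succeeded` within about two seconds, after which the test fetches the container logs and deletes the pod.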
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1481,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:31.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 8 13:37:32.044: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:32.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8896" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":88,"skipped":1489,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:32.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:49.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5908" for this suite. • [SLOW TEST:17.131 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":89,"skipped":1490,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:49.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-92bbe1fa-1d5b-4ade-95cf-dedb3b003659 STEP: Creating a pod to test consume secrets Mar 8 13:37:49.354: INFO: Waiting up to 5m0s for pod "pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986" in namespace "secrets-831" to be "success or failure" Mar 8 13:37:49.386: INFO: Pod "pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986": Phase="Pending", Reason="", readiness=false. Elapsed: 32.112765ms Mar 8 13:37:51.391: INFO: Pod "pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.036371797s STEP: Saw pod success Mar 8 13:37:51.391: INFO: Pod "pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986" satisfied condition "success or failure" Mar 8 13:37:51.394: INFO: Trying to get logs from node kind-worker pod pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986 container secret-volume-test: STEP: delete the pod Mar 8 13:37:51.431: INFO: Waiting for pod pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986 to disappear Mar 8 13:37:51.436: INFO: Pod pod-secrets-783bd489-4b19-4521-8c4d-d12122a65986 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:51.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-831" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:51.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating a pod to test downward API volume plugin Mar 8 13:37:51.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852" in namespace "projected-3921" to be "success or failure" Mar 8 13:37:51.496: INFO: Pod "downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852": Phase="Pending", Reason="", readiness=false. Elapsed: 3.841944ms Mar 8 13:37:53.499: INFO: Pod "downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007220751s STEP: Saw pod success Mar 8 13:37:53.499: INFO: Pod "downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852" satisfied condition "success or failure" Mar 8 13:37:53.502: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852 container client-container: STEP: delete the pod Mar 8 13:37:53.539: INFO: Waiting for pod downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852 to disappear Mar 8 13:37:53.543: INFO: Pod downwardapi-volume-5195344f-6b95-443a-9857-6ee6bb56d852 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:53.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3921" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1527,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:53.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-58cbbd24-5d0f-401e-aa8f-2147167ea3aa STEP: Creating a pod to test consume secrets Mar 8 13:37:53.603: INFO: Waiting up to 5m0s for pod "pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1" in namespace "secrets-3755" to be "success or failure" Mar 8 13:37:53.607: INFO: Pod "pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.692297ms Mar 8 13:37:55.611: INFO: Pod "pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008771187s Mar 8 13:37:57.616: INFO: Pod "pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01307134s STEP: Saw pod success Mar 8 13:37:57.616: INFO: Pod "pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1" satisfied condition "success or failure" Mar 8 13:37:57.618: INFO: Trying to get logs from node kind-worker pod pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1 container secret-volume-test: STEP: delete the pod Mar 8 13:37:57.651: INFO: Waiting for pod pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1 to disappear Mar 8 13:37:57.655: INFO: Pod pod-secrets-131468f3-f61e-49cd-aa64-f849611bd5e1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:37:57.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3755" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1549,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:37:57.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 13:37:57.717: INFO: PodSpec: initContainers in spec.initContainers Mar 8 13:38:48.706: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4d7d186c-b734-45a1-a0a8-5043601f305f", GenerateName:"", Namespace:"init-container-7193", SelfLink:"/api/v1/namespaces/init-container-7193/pods/pod-init-4d7d186c-b734-45a1-a0a8-5043601f305f", UID:"c80406e7-5275-44f2-8b69-7ee525d549f5", ResourceVersion:"14776", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719271477, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"717292891"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-p9tdd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000b200c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p9tdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p9tdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p9tdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c74be8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028e7380), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c74c70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c74c90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c74c98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c74c9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271477, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.92", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.92"}}, StartTime:(*v1.Time)(0xc002046060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a909a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a90a10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b231f8a192508b2e7b4af6d7d2eb7a5ad07f6b97f843275352b23b2ad7927a86", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0020460a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002046080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, 
Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002c74d1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:38:48.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7193" for this suite. • [SLOW TEST:51.047 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":93,"skipped":1564,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:38:48.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name 
projected-secret-test-b8ae72e1-139f-4e4e-8bfc-cec23e54b31e STEP: Creating a pod to test consume secrets Mar 8 13:38:48.779: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea" in namespace "projected-2478" to be "success or failure" Mar 8 13:38:48.783: INFO: Pod "pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.800644ms Mar 8 13:38:50.819: INFO: Pod "pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040404887s STEP: Saw pod success Mar 8 13:38:50.819: INFO: Pod "pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea" satisfied condition "success or failure" Mar 8 13:38:50.822: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea container projected-secret-volume-test: STEP: delete the pod Mar 8 13:38:50.854: INFO: Waiting for pod pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea to disappear Mar 8 13:38:50.862: INFO: Pod pod-projected-secrets-f83fc9d6-d43d-425a-88f9-886e81b0ffea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:38:50.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2478" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:38:50.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 13:38:50.934: INFO: Waiting up to 5m0s for pod "downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70" in namespace "downward-api-4409" to be "success or failure" Mar 8 13:38:50.940: INFO: Pod "downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70": Phase="Pending", Reason="", readiness=false. Elapsed: 5.86052ms Mar 8 13:38:52.944: INFO: Pod "downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009747805s STEP: Saw pod success Mar 8 13:38:52.944: INFO: Pod "downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70" satisfied condition "success or failure" Mar 8 13:38:52.948: INFO: Trying to get logs from node kind-worker pod downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70 container dapi-container: STEP: delete the pod Mar 8 13:38:52.982: INFO: Waiting for pod downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70 to disappear Mar 8 13:38:52.988: INFO: Pod downward-api-81a39ba1-f7c2-4df3-9eed-3cbaed274b70 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:38:52.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4409" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:38:53.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:39:53.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5464" for this suite. • [SLOW TEST:60.055 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1648,"failed":0} [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:39:53.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 8 13:39:53.158: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:39:53.163: INFO: Number of nodes with available pods: 0 Mar 8 13:39:53.163: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:39:54.167: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:39:54.169: INFO: Number of nodes with available pods: 0 Mar 8 13:39:54.169: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:39:55.167: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:39:55.170: INFO: Number of nodes with available pods: 2 Mar 8 13:39:55.170: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 8 13:39:55.186: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:39:55.191: INFO: Number of nodes with available pods: 2 Mar 8 13:39:55.191: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-872, will wait for the garbage collector to delete the pods Mar 8 13:39:56.273: INFO: Deleting DaemonSet.extensions daemon-set took: 5.637204ms Mar 8 13:39:56.573: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.306559ms Mar 8 13:40:09.476: INFO: Number of nodes with available pods: 0 Mar 8 13:40:09.476: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 13:40:09.478: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-872/daemonsets","resourceVersion":"15166"},"items":null} Mar 8 13:40:09.480: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-872/pods","resourceVersion":"15166"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:40:09.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-872" for this suite. 
• [SLOW TEST:16.446 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":97,"skipped":1648,"failed":0} [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:40:09.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:40:09.556: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:40:11.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2835" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1648,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:40:11.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:40:11.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28" in namespace "projected-3533" to be "success or failure" Mar 8 13:40:11.672: INFO: Pod "downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.647299ms Mar 8 13:40:13.676: INFO: Pod "downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010779322s STEP: Saw pod success Mar 8 13:40:13.677: INFO: Pod "downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28" satisfied condition "success or failure" Mar 8 13:40:13.679: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28 container client-container: STEP: delete the pod Mar 8 13:40:13.712: INFO: Waiting for pod downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28 to disappear Mar 8 13:40:13.720: INFO: Pod downwardapi-volume-02133f52-e3b9-430c-b535-4fc858d4fa28 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:40:13.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3533" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:40:13.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:40:13.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7615" 
for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":100,"skipped":1686,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:40:13.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-705aff9b-d8bd-459f-bf60-e10b04ef3afa STEP: Creating a pod to test consume secrets Mar 8 13:40:14.014: INFO: Waiting up to 5m0s for pod "pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3" in namespace "secrets-3937" to be "success or failure" Mar 8 13:40:14.018: INFO: Pod "pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121886ms Mar 8 13:40:16.022: INFO: Pod "pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007865901s STEP: Saw pod success Mar 8 13:40:16.022: INFO: Pod "pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3" satisfied condition "success or failure" Mar 8 13:40:16.025: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3 container secret-volume-test: STEP: delete the pod Mar 8 13:40:16.063: INFO: Waiting for pod pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3 to disappear Mar 8 13:40:16.066: INFO: Pod pod-secrets-5a1573b8-2464-4357-a376-b0d99433fde3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:40:16.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3937" for this suite. STEP: Destroying namespace "secret-namespace-3899" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1690,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:40:16.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test 
downward api env vars Mar 8 13:40:16.123: INFO: Waiting up to 5m0s for pod "downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365" in namespace "downward-api-8676" to be "success or failure" Mar 8 13:40:16.126: INFO: Pod "downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958628ms Mar 8 13:40:18.130: INFO: Pod "downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007093493s STEP: Saw pod success Mar 8 13:40:18.130: INFO: Pod "downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365" satisfied condition "success or failure" Mar 8 13:40:18.133: INFO: Trying to get logs from node kind-worker pod downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365 container dapi-container: STEP: delete the pod Mar 8 13:40:18.175: INFO: Waiting for pod downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365 to disappear Mar 8 13:40:18.180: INFO: Pod downward-api-38e5edff-2f8a-4811-b9b8-06fd7d18f365 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:40:18.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8676" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:40:18.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-6474a3cd-ff7a-4789-9a00-0d6fcafc2879 STEP: Creating secret with name s-test-opt-upd-4e325908-891b-497b-9209-7401f8fd3979 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6474a3cd-ff7a-4789-9a00-0d6fcafc2879 STEP: Updating secret s-test-opt-upd-4e325908-891b-497b-9209-7401f8fd3979 STEP: Creating secret with name s-test-opt-create-79916414-8b31-4d14-b861-f34998a64e79 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:41:42.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8105" for this suite. 
• [SLOW TEST:84.544 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:41:42.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-9z8s STEP: Creating a pod to test atomic-volume-subpath Mar 8 13:41:42.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9z8s" in namespace "subpath-7604" to be "success or failure" Mar 8 13:41:42.868: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.125862ms Mar 8 13:41:44.871: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 2.028067063s Mar 8 13:41:46.875: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 4.031864991s Mar 8 13:41:48.879: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 6.035918266s Mar 8 13:41:50.883: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 8.039669287s Mar 8 13:41:52.887: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 10.043568602s Mar 8 13:41:54.891: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 12.047252879s Mar 8 13:41:56.894: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 14.050979196s Mar 8 13:41:58.898: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 16.055062266s Mar 8 13:42:00.902: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 18.058572848s Mar 8 13:42:02.906: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Running", Reason="", readiness=true. Elapsed: 20.062570988s Mar 8 13:42:04.910: INFO: Pod "pod-subpath-test-configmap-9z8s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.066390545s STEP: Saw pod success Mar 8 13:42:04.910: INFO: Pod "pod-subpath-test-configmap-9z8s" satisfied condition "success or failure" Mar 8 13:42:04.913: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-9z8s container test-container-subpath-configmap-9z8s: STEP: delete the pod Mar 8 13:42:04.964: INFO: Waiting for pod pod-subpath-test-configmap-9z8s to disappear Mar 8 13:42:04.973: INFO: Pod pod-subpath-test-configmap-9z8s no longer exists STEP: Deleting pod pod-subpath-test-configmap-9z8s Mar 8 13:42:04.973: INFO: Deleting pod "pod-subpath-test-configmap-9z8s" in namespace "subpath-7604" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:04.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7604" for this suite. • [SLOW TEST:22.234 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":1806,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:04.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 13:42:05.041: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 13:42:05.053: INFO: Waiting for terminating namespaces to be deleted... Mar 8 13:42:05.056: INFO: Logging pods the kubelet thinks is on node kind-worker before test Mar 8 13:42:05.061: INFO: kindnet-p9whg from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:42:05.061: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 13:42:05.061: INFO: kube-proxy-pz8tf from kube-system started at 2020-03-08 12:58:54 +0000 UTC (1 container statuses recorded) Mar 8 13:42:05.061: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:42:05.061: INFO: Logging pods the kubelet thinks is on node kind-worker2 before test Mar 8 13:42:05.066: INFO: kindnet-mjfxb from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:42:05.066: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 13:42:05.066: INFO: kube-proxy-vfcnx from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:42:05.066: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node kind-worker STEP: verifying the node has the label node kind-worker2 Mar 8 13:42:05.132: INFO: Pod kindnet-mjfxb 
requesting resource cpu=100m on Node kind-worker2 Mar 8 13:42:05.132: INFO: Pod kindnet-p9whg requesting resource cpu=100m on Node kind-worker Mar 8 13:42:05.132: INFO: Pod kube-proxy-pz8tf requesting resource cpu=0m on Node kind-worker Mar 8 13:42:05.132: INFO: Pod kube-proxy-vfcnx requesting resource cpu=0m on Node kind-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 8 13:42:05.132: INFO: Creating a pod which consumes cpu=11130m on Node kind-worker Mar 8 13:42:05.139: INFO: Creating a pod which consumes cpu=11130m on Node kind-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76.15fa57f429edaa7b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7183/filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76 to kind-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76.15fa57f4580234ae], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76.15fa57f461fe5def], Reason = [Created], Message = [Created container filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76] STEP: Considering event: Type = [Normal], Name = [filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76.15fa57f46c63c282], Reason = [Started], Message = [Started container filler-pod-da56a9ed-4e82-4615-80b3-2e2cd2658f76] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007.15fa57f429eb81b7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7183/filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007 to kind-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007.15fa57f45956149c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: 
Considering event: Type = [Normal], Name = [filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007.15fa57f4636d71ef], Reason = [Created], Message = [Created container filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007.15fa57f46dd5e636], Reason = [Started], Message = [Started container filler-pod-ee11a633-0eb3-4ad5-9502-cdd367dbe007] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa57f4a215d19f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kind-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kind-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7183" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":105,"skipped":1822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:08.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:24.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7229" for this suite. • [SLOW TEST:16.196 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":106,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:24.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 8 13:42:26.619: INFO: &Pod{ObjectMeta:{send-events-4fae643f-9990-4c44-971d-dd7253dec573 events-1155 /api/v1/namespaces/events-1155/pods/send-events-4fae643f-9990-4c44-971d-dd7253dec573 ff7ab619-9330-4924-9f01-2d193baec223 15892 0 2020-03-08 13:42:24 +0000 UTC map[name:foo time:596432125] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kqs2d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kqs2d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kqs2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:42:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:42:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:42:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:42:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.102,StartTime:2020-03-08 13:42:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 13:42:25 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://0092d482e08f38c3137ffc3417997a9b5a8d9e561ff9392645bafb076476046c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 8 13:42:28.623: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 8 13:42:30.628: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:30.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1155" for this suite. 
• [SLOW TEST:6.211 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":107,"skipped":1884,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:30.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:42:31.370: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 13:42:33.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271751, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271751, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271751, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271751, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:42:36.445: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:36.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4149" for this suite. STEP: Destroying namespace "webhook-4149-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.977 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":108,"skipped":1885,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:36.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 8 13:42:36.680: INFO: Waiting up to 5m0s for pod "client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0" in namespace "containers-6011" to be "success or failure" Mar 8 13:42:36.686: INFO: Pod "client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.26811ms Mar 8 13:42:38.690: INFO: Pod "client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01033674s STEP: Saw pod success Mar 8 13:42:38.690: INFO: Pod "client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0" satisfied condition "success or failure" Mar 8 13:42:38.694: INFO: Trying to get logs from node kind-worker pod client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0 container test-container: STEP: delete the pod Mar 8 13:42:38.734: INFO: Waiting for pod client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0 to disappear Mar 8 13:42:38.740: INFO: Pod client-containers-96741846-290b-4ec2-b0a8-2a2c30112ef0 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:38.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6011" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1896,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:38.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Creating configMap with name configmap-test-volume-23409102-7020-4a5c-a733-5f37b406b48c STEP: Creating a pod to test consume configMaps Mar 8 13:42:38.857: INFO: Waiting up to 5m0s for pod "pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699" in namespace "configmap-8683" to be "success or failure" Mar 8 13:42:38.879: INFO: Pod "pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699": Phase="Pending", Reason="", readiness=false. Elapsed: 22.11322ms Mar 8 13:42:40.883: INFO: Pod "pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026080338s Mar 8 13:42:42.887: INFO: Pod "pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030010273s STEP: Saw pod success Mar 8 13:42:42.887: INFO: Pod "pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699" satisfied condition "success or failure" Mar 8 13:42:42.890: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699 container configmap-volume-test: STEP: delete the pod Mar 8 13:42:42.910: INFO: Waiting for pod pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699 to disappear Mar 8 13:42:42.915: INFO: Pod pod-configmaps-af3c7de2-7de3-48e0-b459-d9099b4a2699 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:42.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8683" for this suite. 
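The ConfigMap test above mounts a single ConfigMap into multiple volumes of the same pod. A minimal sketch of that shape, with illustrative names and an assumed container image, is:

```yaml
# Hypothetical sketch; names, image, and mount paths are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                    # assumed image
    command: ["cat", "/etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1        # both volumes reference the same ConfigMap
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  restartPolicy: Never
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```

The pod runs to completion ("success or failure" in the log) once the container has read the projected data from both mounts.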
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1898,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:42.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 8 13:42:42.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1532' Mar 8 13:42:45.014: INFO: stderr: "" Mar 8 13:42:45.014: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 13:42:45.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1532' Mar 8 13:42:45.135: INFO: stderr: "" Mar 8 13:42:45.135: INFO: stdout: "update-demo-nautilus-7krpz update-demo-nautilus-smq7x " Mar 8 13:42:45.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7krpz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1532' Mar 8 13:42:45.244: INFO: stderr: "" Mar 8 13:42:45.244: INFO: stdout: "" Mar 8 13:42:45.244: INFO: update-demo-nautilus-7krpz is created but not running Mar 8 13:42:50.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1532' Mar 8 13:42:50.356: INFO: stderr: "" Mar 8 13:42:50.356: INFO: stdout: "update-demo-nautilus-7krpz update-demo-nautilus-smq7x " Mar 8 13:42:50.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7krpz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1532' Mar 8 13:42:50.434: INFO: stderr: "" Mar 8 13:42:50.435: INFO: stdout: "true" Mar 8 13:42:50.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7krpz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1532' Mar 8 13:42:50.506: INFO: stderr: "" Mar 8 13:42:50.506: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:42:50.506: INFO: validating pod update-demo-nautilus-7krpz Mar 8 13:42:50.509: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:42:50.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 13:42:50.509: INFO: update-demo-nautilus-7krpz is verified up and running Mar 8 13:42:50.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smq7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1532' Mar 8 13:42:50.576: INFO: stderr: "" Mar 8 13:42:50.576: INFO: stdout: "true" Mar 8 13:42:50.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smq7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1532' Mar 8 13:42:50.638: INFO: stderr: "" Mar 8 13:42:50.638: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 13:42:50.638: INFO: validating pod update-demo-nautilus-smq7x Mar 8 13:42:50.641: INFO: got data: { "image": "nautilus.jpg" } Mar 8 13:42:50.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 13:42:50.641: INFO: update-demo-nautilus-smq7x is verified up and running STEP: using delete to clean up resources Mar 8 13:42:50.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1532' Mar 8 13:42:50.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 13:42:50.718: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 13:42:50.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1532' Mar 8 13:42:50.799: INFO: stderr: "No resources found in kubectl-1532 namespace.\n" Mar 8 13:42:50.799: INFO: stdout: "" Mar 8 13:42:50.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1532 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 13:42:50.888: INFO: stderr: "" Mar 8 13:42:50.888: INFO: stdout: "update-demo-nautilus-7krpz\nupdate-demo-nautilus-smq7x\n" Mar 8 13:42:51.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1532' Mar 8 13:42:51.481: INFO: stderr: "No resources found in kubectl-1532 namespace.\n" Mar 8 13:42:51.481: INFO: stdout: "" Mar 8 13:42:51.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1532 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 13:42:51.592: INFO: stderr: "" Mar 8 13:42:51.592: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:51.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1532" for this suite. • [SLOW TEST:8.678 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":111,"skipped":1917,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:51.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-6071 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6071 to expose endpoints map[] Mar 8 13:42:51.663: INFO: Get endpoints failed (2.208816ms elapsed, ignoring for 5s): 
endpoints "multi-endpoint-test" not found Mar 8 13:42:52.666: INFO: successfully validated that service multi-endpoint-test in namespace services-6071 exposes endpoints map[] (1.005639636s elapsed) STEP: Creating pod pod1 in namespace services-6071 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6071 to expose endpoints map[pod1:[100]] Mar 8 13:42:54.698: INFO: successfully validated that service multi-endpoint-test in namespace services-6071 exposes endpoints map[pod1:[100]] (2.026035167s elapsed) STEP: Creating pod pod2 in namespace services-6071 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6071 to expose endpoints map[pod1:[100] pod2:[101]] Mar 8 13:42:56.767: INFO: successfully validated that service multi-endpoint-test in namespace services-6071 exposes endpoints map[pod1:[100] pod2:[101]] (2.064643105s elapsed) STEP: Deleting pod pod1 in namespace services-6071 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6071 to expose endpoints map[pod2:[101]] Mar 8 13:42:56.809: INFO: successfully validated that service multi-endpoint-test in namespace services-6071 exposes endpoints map[pod2:[101]] (30.998454ms elapsed) STEP: Deleting pod pod2 in namespace services-6071 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6071 to expose endpoints map[] Mar 8 13:42:56.831: INFO: successfully validated that service multi-endpoint-test in namespace services-6071 exposes endpoints map[] (17.083116ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:42:56.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6071" for this suite. 
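The Services test above verifies that a multiport service exposes the expected endpoint map as pods are created and deleted; the endpoint ports 100 and 101 in the log correspond to the service's target ports. A minimal sketch of such a service, with assumed selector labels and front-end port numbers, is:

```yaml
# Hypothetical sketch; the selector and service ports are assumptions,
# but targetPorts 100/101 match the endpoint ports seen in the log.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint        # assumed label on pod1 and pod2
  ports:
  - name: portname1
    port: 80                   # assumed service port
    targetPort: 100            # pod1's container port
  - name: portname2
    port: 81                   # assumed service port
    targetPort: 101            # pod2's container port
```

As each matching pod becomes ready, the endpoints controller adds its IP and container port to the service's Endpoints object, which is why the map grows to `map[pod1:[100] pod2:[101]]` and shrinks back to `map[]` as pods are deleted.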
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.253 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":112,"skipped":1930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:42:56.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-fwnv STEP: Creating a pod to test atomic-volume-subpath Mar 8 13:42:56.971: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fwnv" in namespace "subpath-1024" to be "success or failure" Mar 8 13:42:56.981: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.494395ms Mar 8 13:42:58.986: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014878016s Mar 8 13:43:00.990: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 4.019139635s Mar 8 13:43:02.994: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 6.023145104s Mar 8 13:43:04.998: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 8.02731712s Mar 8 13:43:07.002: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 10.031759131s Mar 8 13:43:09.006: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 12.0355691s Mar 8 13:43:11.010: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 14.038947044s Mar 8 13:43:13.014: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 16.043048495s Mar 8 13:43:15.017: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 18.046791937s Mar 8 13:43:17.021: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 20.050649977s Mar 8 13:43:19.025: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Running", Reason="", readiness=true. Elapsed: 22.054526869s Mar 8 13:43:21.029: INFO: Pod "pod-subpath-test-projected-fwnv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.05823726s STEP: Saw pod success Mar 8 13:43:21.029: INFO: Pod "pod-subpath-test-projected-fwnv" satisfied condition "success or failure" Mar 8 13:43:21.032: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-projected-fwnv container test-container-subpath-projected-fwnv: STEP: delete the pod Mar 8 13:43:21.049: INFO: Waiting for pod pod-subpath-test-projected-fwnv to disappear Mar 8 13:43:21.069: INFO: Pod pod-subpath-test-projected-fwnv no longer exists STEP: Deleting pod pod-subpath-test-projected-fwnv Mar 8 13:43:21.069: INFO: Deleting pod "pod-subpath-test-projected-fwnv" in namespace "subpath-1024" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:21.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1024" for this suite. • [SLOW TEST:24.224 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":113,"skipped":1958,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:21.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:43:21.119: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 13:43:24.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5458 create -f -' Mar 8 13:43:26.102: INFO: stderr: "" Mar 8 13:43:26.102: INFO: stdout: "e2e-test-crd-publish-openapi-1064-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 13:43:26.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5458 delete e2e-test-crd-publish-openapi-1064-crds test-cr' Mar 8 13:43:26.217: INFO: stderr: "" Mar 8 13:43:26.217: INFO: stdout: "e2e-test-crd-publish-openapi-1064-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 8 13:43:26.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5458 apply -f -' Mar 8 13:43:26.450: INFO: stderr: "" Mar 8 13:43:26.450: INFO: stdout: "e2e-test-crd-publish-openapi-1064-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 8 13:43:26.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5458 delete e2e-test-crd-publish-openapi-1064-crds test-cr' Mar 8 13:43:26.570: INFO: stderr: "" Mar 8 13:43:26.570: INFO: stdout: 
"e2e-test-crd-publish-openapi-1064-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 13:43:26.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1064-crds' Mar 8 13:43:26.792: INFO: stderr: "" Mar 8 13:43:26.792: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1064-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:29.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5458" for this suite. 
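The CRD test above publishes an OpenAPI schema that preserves unknown fields in an embedded object, which is why `kubectl create`/`apply` accept arbitrary properties and `kubectl explain` still works. A sketch of such a CRD follows; the group, kind, and plural name are taken from the log, but the schema body is a reconstruction under the assumption that `spec` and `status` are the fields that preserve unknown properties:

```yaml
# Partially hypothetical sketch; schema details are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-1064-crds.crd-publish-openapi-test-unknown-in-nested.example.com
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  names:
    plural: e2e-test-crd-publish-openapi-1064-crds
    kind: E2e-test-crd-publish-openapi-1064-crd
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true   # allows unknown properties in this subtree
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

With `x-kubernetes-preserve-unknown-fields: true` on an embedded object, the API server skips pruning inside that subtree, so client-side validation permits requests with any unknown properties, matching the behavior exercised in the log.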
• [SLOW TEST:8.764 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":114,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:29.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 13:43:29.902: INFO: Waiting up to 5m0s for pod "downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87" in namespace "downward-api-6519" to be "success or failure" Mar 8 13:43:29.905: INFO: Pod "downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014614ms Mar 8 13:43:31.908: INFO: Pod "downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006537856s STEP: Saw pod success Mar 8 13:43:31.908: INFO: Pod "downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87" satisfied condition "success or failure" Mar 8 13:43:31.911: INFO: Trying to get logs from node kind-worker2 pod downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87 container dapi-container: STEP: delete the pod Mar 8 13:43:31.940: INFO: Waiting for pod downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87 to disappear Mar 8 13:43:31.967: INFO: Pod downward-api-c94fe223-8793-404b-80fc-b1a91bb89a87 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:31.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6519" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1979,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:31.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:32.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1202" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:32.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:43:32.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:43:35.588: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate 
custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:43:35.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-140-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:36.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4300" for this suite. STEP: Destroying namespace "webhook-4300-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":117,"skipped":2064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:37.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:43:37.065: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8584 I0308 13:43:37.078366 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8584, replica count: 1 I0308 13:43:38.128723 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 13:43:39.128912 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 13:43:39.254: INFO: Created: latency-svc-9rs8f Mar 8 13:43:39.260: INFO: Got endpoints: latency-svc-9rs8f [31.736058ms] Mar 8 13:43:39.284: INFO: Created: latency-svc-tm699 Mar 8 13:43:39.301: INFO: Got endpoints: latency-svc-tm699 [39.885514ms] Mar 8 13:43:39.318: INFO: Created: latency-svc-8pzpp Mar 8 13:43:39.350: INFO: Got endpoints: latency-svc-8pzpp [89.637137ms] Mar 8 13:43:39.367: INFO: Created: latency-svc-gz2bw Mar 8 13:43:39.371: INFO: Got endpoints: latency-svc-gz2bw [110.732485ms] Mar 8 13:43:39.391: INFO: Created: latency-svc-8cl8j Mar 8 13:43:39.402: INFO: Got endpoints: latency-svc-8cl8j [141.421118ms] Mar 8 13:43:39.409: INFO: Created: latency-svc-nznc4 Mar 8 13:43:39.414: INFO: Got endpoints: latency-svc-nznc4 [152.862785ms] Mar 8 13:43:39.426: INFO: Created: latency-svc-5448w Mar 8 13:43:39.432: INFO: Got endpoints: latency-svc-5448w [171.498984ms] Mar 8 13:43:39.470: INFO: Created: latency-svc-kfdgv Mar 8 13:43:39.486: INFO: Created: latency-svc-wfjhw Mar 8 13:43:39.487: INFO: Got endpoints: latency-svc-kfdgv [225.575887ms] Mar 8 13:43:39.491: INFO: Got endpoints: latency-svc-wfjhw [230.289262ms] Mar 8 13:43:39.510: INFO: Created: latency-svc-vztpz Mar 8 13:43:39.515: INFO: Got endpoints: latency-svc-vztpz [254.469064ms] 
Mar 8 13:43:39.528: INFO: Created: latency-svc-77hl2 Mar 8 13:43:39.533: INFO: Got endpoints: latency-svc-77hl2 [271.802398ms] Mar 8 13:43:39.546: INFO: Created: latency-svc-9xvrs Mar 8 13:43:39.551: INFO: Got endpoints: latency-svc-9xvrs [290.082185ms] Mar 8 13:43:39.589: INFO: Created: latency-svc-r65kl Mar 8 13:43:39.613: INFO: Got endpoints: latency-svc-r65kl [351.934823ms] Mar 8 13:43:39.613: INFO: Created: latency-svc-xfw45 Mar 8 13:43:39.637: INFO: Got endpoints: latency-svc-xfw45 [376.274387ms] Mar 8 13:43:39.661: INFO: Created: latency-svc-fwfqd Mar 8 13:43:39.664: INFO: Got endpoints: latency-svc-fwfqd [403.298386ms] Mar 8 13:43:39.684: INFO: Created: latency-svc-5pslg Mar 8 13:43:39.703: INFO: Got endpoints: latency-svc-5pslg [442.195755ms] Mar 8 13:43:39.714: INFO: Created: latency-svc-dftxb Mar 8 13:43:39.718: INFO: Got endpoints: latency-svc-dftxb [417.80537ms] Mar 8 13:43:39.738: INFO: Created: latency-svc-csrqx Mar 8 13:43:39.743: INFO: Got endpoints: latency-svc-csrqx [392.431376ms] Mar 8 13:43:39.762: INFO: Created: latency-svc-9z64q Mar 8 13:43:39.780: INFO: Got endpoints: latency-svc-9z64q [408.900918ms] Mar 8 13:43:39.798: INFO: Created: latency-svc-mrhfh Mar 8 13:43:39.829: INFO: Got endpoints: latency-svc-mrhfh [426.610482ms] Mar 8 13:43:39.830: INFO: Created: latency-svc-tbhnf Mar 8 13:43:39.839: INFO: Got endpoints: latency-svc-tbhnf [424.783277ms] Mar 8 13:43:39.858: INFO: Created: latency-svc-rz57h Mar 8 13:43:39.863: INFO: Got endpoints: latency-svc-rz57h [430.852425ms] Mar 8 13:43:39.876: INFO: Created: latency-svc-bf9x4 Mar 8 13:43:39.880: INFO: Got endpoints: latency-svc-bf9x4 [393.861495ms] Mar 8 13:43:39.895: INFO: Created: latency-svc-sg59l Mar 8 13:43:39.899: INFO: Got endpoints: latency-svc-sg59l [407.330873ms] Mar 8 13:43:39.912: INFO: Created: latency-svc-x9vtq Mar 8 13:43:39.949: INFO: Got endpoints: latency-svc-x9vtq [433.447537ms] Mar 8 13:43:39.959: INFO: Created: latency-svc-jpwxm Mar 8 13:43:39.964: INFO: Got endpoints: 
latency-svc-jpwxm [431.568866ms] Mar 8 13:43:39.984: INFO: Created: latency-svc-dxxjv Mar 8 13:43:39.988: INFO: Got endpoints: latency-svc-dxxjv [437.125763ms] Mar 8 13:43:40.002: INFO: Created: latency-svc-4gncb Mar 8 13:43:40.006: INFO: Got endpoints: latency-svc-4gncb [393.193935ms] Mar 8 13:43:40.026: INFO: Created: latency-svc-stv2w Mar 8 13:43:40.036: INFO: Got endpoints: latency-svc-stv2w [398.685354ms] Mar 8 13:43:40.068: INFO: Created: latency-svc-l4df9 Mar 8 13:43:40.080: INFO: Got endpoints: latency-svc-l4df9 [416.077322ms] Mar 8 13:43:40.104: INFO: Created: latency-svc-kw4v8 Mar 8 13:43:40.114: INFO: Got endpoints: latency-svc-kw4v8 [411.175381ms] Mar 8 13:43:40.134: INFO: Created: latency-svc-5jz5n Mar 8 13:43:40.144: INFO: Got endpoints: latency-svc-5jz5n [425.43299ms] Mar 8 13:43:40.163: INFO: Created: latency-svc-4dttf Mar 8 13:43:40.188: INFO: Got endpoints: latency-svc-4dttf [445.264817ms] Mar 8 13:43:40.189: INFO: Created: latency-svc-8c594 Mar 8 13:43:40.198: INFO: Got endpoints: latency-svc-8c594 [418.023797ms] Mar 8 13:43:40.217: INFO: Created: latency-svc-xwjsx Mar 8 13:43:40.236: INFO: Got endpoints: latency-svc-xwjsx [407.355308ms] Mar 8 13:43:40.254: INFO: Created: latency-svc-6vm6n Mar 8 13:43:40.259: INFO: Got endpoints: latency-svc-6vm6n [420.157118ms] Mar 8 13:43:40.278: INFO: Created: latency-svc-7fld4 Mar 8 13:43:40.282: INFO: Got endpoints: latency-svc-7fld4 [418.812835ms] Mar 8 13:43:40.320: INFO: Created: latency-svc-jrvzq Mar 8 13:43:40.339: INFO: Created: latency-svc-5n4h6 Mar 8 13:43:40.339: INFO: Got endpoints: latency-svc-jrvzq [458.417188ms] Mar 8 13:43:40.342: INFO: Got endpoints: latency-svc-5n4h6 [443.360032ms] Mar 8 13:43:40.363: INFO: Created: latency-svc-drnnj Mar 8 13:43:40.372: INFO: Got endpoints: latency-svc-drnnj [423.085709ms] Mar 8 13:43:40.392: INFO: Created: latency-svc-hdw2j Mar 8 13:43:40.395: INFO: Got endpoints: latency-svc-hdw2j [430.896775ms] Mar 8 13:43:40.409: INFO: Created: latency-svc-m4d28 Mar 8 
13:43:40.416: INFO: Got endpoints: latency-svc-m4d28 [427.940126ms] Mar 8 13:43:40.439: INFO: Created: latency-svc-ndk97 Mar 8 13:43:40.458: INFO: Got endpoints: latency-svc-ndk97 [451.85171ms] Mar 8 13:43:40.458: INFO: Created: latency-svc-j692t Mar 8 13:43:40.470: INFO: Got endpoints: latency-svc-j692t [434.460369ms] Mar 8 13:43:40.482: INFO: Created: latency-svc-qhq6q Mar 8 13:43:40.492: INFO: Got endpoints: latency-svc-qhq6q [411.089312ms] Mar 8 13:43:40.512: INFO: Created: latency-svc-7pf6k Mar 8 13:43:40.536: INFO: Got endpoints: latency-svc-7pf6k [422.120172ms] Mar 8 13:43:40.571: INFO: Created: latency-svc-bh72d Mar 8 13:43:40.589: INFO: Got endpoints: latency-svc-bh72d [445.473341ms] Mar 8 13:43:40.590: INFO: Created: latency-svc-wqj5p Mar 8 13:43:40.593: INFO: Got endpoints: latency-svc-wqj5p [405.306384ms] Mar 8 13:43:40.613: INFO: Created: latency-svc-skq2b Mar 8 13:43:40.617: INFO: Got endpoints: latency-svc-skq2b [418.94672ms] Mar 8 13:43:40.638: INFO: Created: latency-svc-hzjmj Mar 8 13:43:40.641: INFO: Got endpoints: latency-svc-hzjmj [405.153349ms] Mar 8 13:43:40.661: INFO: Created: latency-svc-59ntw Mar 8 13:43:40.691: INFO: Got endpoints: latency-svc-59ntw [431.907625ms] Mar 8 13:43:40.704: INFO: Created: latency-svc-zwgk7 Mar 8 13:43:40.714: INFO: Got endpoints: latency-svc-zwgk7 [431.498908ms] Mar 8 13:43:40.729: INFO: Created: latency-svc-pc5bm Mar 8 13:43:40.733: INFO: Got endpoints: latency-svc-pc5bm [394.114375ms] Mar 8 13:43:40.746: INFO: Created: latency-svc-7kn77 Mar 8 13:43:40.750: INFO: Got endpoints: latency-svc-7kn77 [407.936986ms] Mar 8 13:43:40.770: INFO: Created: latency-svc-j585g Mar 8 13:43:40.817: INFO: Got endpoints: latency-svc-j585g [444.810695ms] Mar 8 13:43:40.830: INFO: Created: latency-svc-6n4p7 Mar 8 13:43:40.839: INFO: Got endpoints: latency-svc-6n4p7 [443.360117ms] Mar 8 13:43:40.860: INFO: Created: latency-svc-r457v Mar 8 13:43:40.871: INFO: Got endpoints: latency-svc-r457v [455.263926ms] Mar 8 13:43:40.884: INFO: 
Created: latency-svc-wlrl8 Mar 8 13:43:40.887: INFO: Got endpoints: latency-svc-wlrl8 [429.219977ms] Mar 8 13:43:40.908: INFO: Created: latency-svc-6p289 Mar 8 13:43:40.936: INFO: Got endpoints: latency-svc-6p289 [465.65968ms] Mar 8 13:43:40.956: INFO: Created: latency-svc-qw9sd Mar 8 13:43:40.965: INFO: Got endpoints: latency-svc-qw9sd [473.454727ms] Mar 8 13:43:40.986: INFO: Created: latency-svc-tdp8v Mar 8 13:43:40.989: INFO: Got endpoints: latency-svc-tdp8v [453.097401ms] Mar 8 13:43:41.005: INFO: Created: latency-svc-jv7hq Mar 8 13:43:41.007: INFO: Got endpoints: latency-svc-jv7hq [417.789801ms] Mar 8 13:43:41.027: INFO: Created: latency-svc-lwdj7 Mar 8 13:43:41.031: INFO: Got endpoints: latency-svc-lwdj7 [437.94013ms] Mar 8 13:43:41.058: INFO: Created: latency-svc-9psls Mar 8 13:43:41.061: INFO: Got endpoints: latency-svc-9psls [443.687023ms] Mar 8 13:43:41.081: INFO: Created: latency-svc-vbbgv Mar 8 13:43:41.091: INFO: Got endpoints: latency-svc-vbbgv [449.795104ms] Mar 8 13:43:41.111: INFO: Created: latency-svc-46th5 Mar 8 13:43:41.121: INFO: Got endpoints: latency-svc-46th5 [430.550837ms] Mar 8 13:43:41.141: INFO: Created: latency-svc-zjcbl Mar 8 13:43:41.151: INFO: Got endpoints: latency-svc-zjcbl [437.736469ms] Mar 8 13:43:41.182: INFO: Created: latency-svc-s2ffl Mar 8 13:43:41.187: INFO: Got endpoints: latency-svc-s2ffl [453.591761ms] Mar 8 13:43:41.208: INFO: Created: latency-svc-bwpwj Mar 8 13:43:41.218: INFO: Got endpoints: latency-svc-bwpwj [467.651581ms] Mar 8 13:43:41.239: INFO: Created: latency-svc-rdjn8 Mar 8 13:43:41.269: INFO: Created: latency-svc-ccffn Mar 8 13:43:41.270: INFO: Got endpoints: latency-svc-rdjn8 [452.906634ms] Mar 8 13:43:41.308: INFO: Created: latency-svc-tmz57 Mar 8 13:43:41.312: INFO: Got endpoints: latency-svc-ccffn [473.587911ms] Mar 8 13:43:41.333: INFO: Created: latency-svc-2jwc7 Mar 8 13:43:41.351: INFO: Created: latency-svc-9z5pf Mar 8 13:43:41.376: INFO: Created: latency-svc-4fkdp Mar 8 13:43:41.376: INFO: Got 
endpoints: latency-svc-tmz57 [504.420469ms] Mar 8 13:43:41.405: INFO: Created: latency-svc-rx5ts Mar 8 13:43:41.434: INFO: Got endpoints: latency-svc-2jwc7 [546.639583ms] Mar 8 13:43:41.448: INFO: Created: latency-svc-n784g Mar 8 13:43:41.457: INFO: Got endpoints: latency-svc-9z5pf [520.789153ms] Mar 8 13:43:41.478: INFO: Created: latency-svc-mf5hn Mar 8 13:43:41.496: INFO: Created: latency-svc-6wc75 Mar 8 13:43:41.514: INFO: Got endpoints: latency-svc-4fkdp [548.731926ms] Mar 8 13:43:41.514: INFO: Created: latency-svc-pg5h5 Mar 8 13:43:41.547: INFO: Created: latency-svc-6thqw Mar 8 13:43:41.573: INFO: Created: latency-svc-6lzgt Mar 8 13:43:41.574: INFO: Got endpoints: latency-svc-rx5ts [584.218714ms] Mar 8 13:43:41.591: INFO: Created: latency-svc-ztzfw Mar 8 13:43:41.609: INFO: Got endpoints: latency-svc-n784g [602.094399ms] Mar 8 13:43:41.609: INFO: Created: latency-svc-n7jtb Mar 8 13:43:41.628: INFO: Created: latency-svc-4vrcw Mar 8 13:43:41.645: INFO: Created: latency-svc-l7j2g Mar 8 13:43:41.679: INFO: Got endpoints: latency-svc-mf5hn [647.93031ms] Mar 8 13:43:41.684: INFO: Created: latency-svc-mnh5c Mar 8 13:43:41.712: INFO: Created: latency-svc-hlv88 Mar 8 13:43:41.712: INFO: Got endpoints: latency-svc-6wc75 [650.746943ms] Mar 8 13:43:41.760: INFO: Created: latency-svc-kvrqz Mar 8 13:43:41.760: INFO: Got endpoints: latency-svc-pg5h5 [668.987534ms] Mar 8 13:43:41.778: INFO: Created: latency-svc-7skql Mar 8 13:43:41.819: INFO: Got endpoints: latency-svc-6thqw [697.846578ms] Mar 8 13:43:41.820: INFO: Created: latency-svc-zp6rt Mar 8 13:43:41.898: INFO: Created: latency-svc-8dn2t Mar 8 13:43:41.898: INFO: Got endpoints: latency-svc-6lzgt [746.705628ms] Mar 8 13:43:41.928: INFO: Got endpoints: latency-svc-ztzfw [740.819431ms] Mar 8 13:43:41.952: INFO: Created: latency-svc-jtjfj Mar 8 13:43:41.970: INFO: Created: latency-svc-pl6mc Mar 8 13:43:41.970: INFO: Got endpoints: latency-svc-n7jtb [752.803851ms] Mar 8 13:43:41.988: INFO: Created: latency-svc-brt4p Mar 8 
13:43:42.024: INFO: Got endpoints: latency-svc-4vrcw [754.255921ms] Mar 8 13:43:42.024: INFO: Created: latency-svc-c2psc Mar 8 13:43:42.065: INFO: Created: latency-svc-2jrns Mar 8 13:43:42.065: INFO: Got endpoints: latency-svc-l7j2g [752.905985ms] Mar 8 13:43:42.083: INFO: Created: latency-svc-pkf9s Mar 8 13:43:42.101: INFO: Created: latency-svc-cn8tx Mar 8 13:43:42.119: INFO: Created: latency-svc-mm9c2 Mar 8 13:43:42.119: INFO: Got endpoints: latency-svc-mnh5c [743.562224ms] Mar 8 13:43:42.138: INFO: Created: latency-svc-2zwcz Mar 8 13:43:42.176: INFO: Got endpoints: latency-svc-hlv88 [742.218244ms] Mar 8 13:43:42.177: INFO: Created: latency-svc-rzcz6 Mar 8 13:43:42.204: INFO: Created: latency-svc-h98q9 Mar 8 13:43:42.206: INFO: Got endpoints: latency-svc-kvrqz [749.130022ms] Mar 8 13:43:42.228: INFO: Created: latency-svc-j6qdv Mar 8 13:43:42.257: INFO: Got endpoints: latency-svc-7skql [742.681127ms] Mar 8 13:43:42.278: INFO: Created: latency-svc-pbnz7 Mar 8 13:43:42.317: INFO: Got endpoints: latency-svc-zp6rt [743.134941ms] Mar 8 13:43:42.347: INFO: Created: latency-svc-4j8qn Mar 8 13:43:42.356: INFO: Got endpoints: latency-svc-8dn2t [746.719907ms] Mar 8 13:43:42.389: INFO: Created: latency-svc-79dwl Mar 8 13:43:42.410: INFO: Got endpoints: latency-svc-jtjfj [730.283548ms] Mar 8 13:43:42.455: INFO: Created: latency-svc-zq6fw Mar 8 13:43:42.464: INFO: Got endpoints: latency-svc-pl6mc [751.738182ms] Mar 8 13:43:42.497: INFO: Created: latency-svc-n5trq Mar 8 13:43:42.506: INFO: Got endpoints: latency-svc-brt4p [745.450741ms] Mar 8 13:43:42.551: INFO: Created: latency-svc-p2tw6 Mar 8 13:43:42.569: INFO: Got endpoints: latency-svc-c2psc [750.194403ms] Mar 8 13:43:42.593: INFO: Created: latency-svc-mqw9z Mar 8 13:43:42.607: INFO: Got endpoints: latency-svc-2jrns [708.498095ms] Mar 8 13:43:42.655: INFO: Created: latency-svc-l4qdd Mar 8 13:43:42.658: INFO: Got endpoints: latency-svc-pkf9s [730.886294ms] Mar 8 13:43:42.677: INFO: Created: latency-svc-nd8jx Mar 8 
13:43:42.706: INFO: Got endpoints: latency-svc-cn8tx [735.383031ms] Mar 8 13:43:42.724: INFO: Created: latency-svc-5n2rn Mar 8 13:43:42.774: INFO: Got endpoints: latency-svc-mm9c2 [750.479767ms] Mar 8 13:43:42.797: INFO: Created: latency-svc-xsfkj Mar 8 13:43:42.806: INFO: Got endpoints: latency-svc-2zwcz [740.585477ms] Mar 8 13:43:42.835: INFO: Created: latency-svc-p7wb2 Mar 8 13:43:42.857: INFO: Got endpoints: latency-svc-rzcz6 [737.056593ms] Mar 8 13:43:42.901: INFO: Created: latency-svc-rzhs7 Mar 8 13:43:42.907: INFO: Got endpoints: latency-svc-h98q9 [730.96446ms] Mar 8 13:43:42.941: INFO: Created: latency-svc-gb9m2 Mar 8 13:43:42.956: INFO: Got endpoints: latency-svc-j6qdv [750.082491ms] Mar 8 13:43:42.988: INFO: Created: latency-svc-lwjp4 Mar 8 13:43:43.020: INFO: Got endpoints: latency-svc-pbnz7 [763.552583ms] Mar 8 13:43:43.042: INFO: Created: latency-svc-hhrxt Mar 8 13:43:43.056: INFO: Got endpoints: latency-svc-4j8qn [739.40874ms] Mar 8 13:43:43.084: INFO: Created: latency-svc-2qkfv Mar 8 13:43:43.114: INFO: Got endpoints: latency-svc-79dwl [758.039481ms] Mar 8 13:43:43.162: INFO: Got endpoints: latency-svc-zq6fw [752.749884ms] Mar 8 13:43:43.163: INFO: Created: latency-svc-wmw9k Mar 8 13:43:43.187: INFO: Created: latency-svc-7blbd Mar 8 13:43:43.206: INFO: Got endpoints: latency-svc-n5trq [742.549321ms] Mar 8 13:43:43.229: INFO: Created: latency-svc-42k7w Mar 8 13:43:43.260: INFO: Got endpoints: latency-svc-p2tw6 [753.870185ms] Mar 8 13:43:43.284: INFO: Created: latency-svc-jlx26 Mar 8 13:43:43.306: INFO: Got endpoints: latency-svc-mqw9z [736.854699ms] Mar 8 13:43:43.327: INFO: Created: latency-svc-qlgnh Mar 8 13:43:43.356: INFO: Got endpoints: latency-svc-l4qdd [749.717549ms] Mar 8 13:43:43.390: INFO: Created: latency-svc-rmdbn Mar 8 13:43:43.406: INFO: Got endpoints: latency-svc-nd8jx [747.753839ms] Mar 8 13:43:43.426: INFO: Created: latency-svc-9jpl4 Mar 8 13:43:43.456: INFO: Got endpoints: latency-svc-5n2rn [750.508504ms] Mar 8 13:43:43.493: INFO: 
Created: latency-svc-jdzgg Mar 8 13:43:43.506: INFO: Got endpoints: latency-svc-xsfkj [731.893147ms] Mar 8 13:43:43.528: INFO: Created: latency-svc-b777d Mar 8 13:43:43.556: INFO: Got endpoints: latency-svc-p7wb2 [750.316405ms] Mar 8 13:43:43.582: INFO: Created: latency-svc-5bv2m Mar 8 13:43:43.607: INFO: Got endpoints: latency-svc-rzhs7 [750.112152ms] Mar 8 13:43:43.624: INFO: Created: latency-svc-2dztw Mar 8 13:43:43.656: INFO: Got endpoints: latency-svc-gb9m2 [749.327033ms] Mar 8 13:43:43.678: INFO: Created: latency-svc-xsf7q Mar 8 13:43:43.706: INFO: Got endpoints: latency-svc-lwjp4 [749.53359ms] Mar 8 13:43:43.737: INFO: Created: latency-svc-d65sn Mar 8 13:43:43.756: INFO: Got endpoints: latency-svc-hhrxt [736.233256ms] Mar 8 13:43:43.779: INFO: Created: latency-svc-qfnnk Mar 8 13:43:43.806: INFO: Got endpoints: latency-svc-2qkfv [750.120349ms] Mar 8 13:43:43.864: INFO: Created: latency-svc-6plr2 Mar 8 13:43:43.864: INFO: Got endpoints: latency-svc-wmw9k [749.373317ms] Mar 8 13:43:43.888: INFO: Created: latency-svc-lzf78 Mar 8 13:43:43.906: INFO: Got endpoints: latency-svc-7blbd [743.726481ms] Mar 8 13:43:43.924: INFO: Created: latency-svc-xk6fw Mar 8 13:43:43.960: INFO: Got endpoints: latency-svc-42k7w [753.727776ms] Mar 8 13:43:43.996: INFO: Created: latency-svc-js8lh Mar 8 13:43:44.006: INFO: Got endpoints: latency-svc-jlx26 [746.469154ms] Mar 8 13:43:44.026: INFO: Created: latency-svc-4cckb Mar 8 13:43:44.057: INFO: Got endpoints: latency-svc-qlgnh [750.413764ms] Mar 8 13:43:44.086: INFO: Created: latency-svc-mj76j Mar 8 13:43:44.106: INFO: Got endpoints: latency-svc-rmdbn [749.948586ms] Mar 8 13:43:44.133: INFO: Created: latency-svc-8cl5q Mar 8 13:43:44.156: INFO: Got endpoints: latency-svc-9jpl4 [750.070664ms] Mar 8 13:43:44.175: INFO: Created: latency-svc-6hbh6 Mar 8 13:43:44.206: INFO: Got endpoints: latency-svc-jdzgg [749.952647ms] Mar 8 13:43:44.229: INFO: Created: latency-svc-czvs9 Mar 8 13:43:44.256: INFO: Got endpoints: latency-svc-b777d 
[749.763937ms] Mar 8 13:43:44.290: INFO: Created: latency-svc-2m4f7 Mar 8 13:43:44.325: INFO: Got endpoints: latency-svc-5bv2m [768.97438ms] Mar 8 13:43:44.344: INFO: Created: latency-svc-kkwgc Mar 8 13:43:44.356: INFO: Got endpoints: latency-svc-2dztw [749.601657ms] Mar 8 13:43:44.380: INFO: Created: latency-svc-d6c2p Mar 8 13:43:44.406: INFO: Got endpoints: latency-svc-xsf7q [750.141444ms] Mar 8 13:43:44.451: INFO: Created: latency-svc-nhgrj Mar 8 13:43:44.459: INFO: Got endpoints: latency-svc-d65sn [752.853578ms] Mar 8 13:43:44.481: INFO: Created: latency-svc-sdxhx Mar 8 13:43:44.507: INFO: Got endpoints: latency-svc-qfnnk [750.771959ms] Mar 8 13:43:44.529: INFO: Created: latency-svc-zlg4p Mar 8 13:43:44.571: INFO: Got endpoints: latency-svc-6plr2 [764.705467ms] Mar 8 13:43:44.595: INFO: Created: latency-svc-dvxhd Mar 8 13:43:44.606: INFO: Got endpoints: latency-svc-lzf78 [742.776351ms] Mar 8 13:43:44.625: INFO: Created: latency-svc-m4v22 Mar 8 13:43:44.656: INFO: Got endpoints: latency-svc-xk6fw [750.139328ms] Mar 8 13:43:44.691: INFO: Created: latency-svc-8ftvs Mar 8 13:43:44.706: INFO: Got endpoints: latency-svc-js8lh [746.207035ms] Mar 8 13:43:44.740: INFO: Created: latency-svc-xsgg9 Mar 8 13:43:44.756: INFO: Got endpoints: latency-svc-4cckb [750.072205ms] Mar 8 13:43:44.775: INFO: Created: latency-svc-v8tvc Mar 8 13:43:44.817: INFO: Got endpoints: latency-svc-mj76j [760.027533ms] Mar 8 13:43:44.841: INFO: Created: latency-svc-f6thh Mar 8 13:43:44.856: INFO: Got endpoints: latency-svc-8cl5q [750.016541ms] Mar 8 13:43:44.889: INFO: Created: latency-svc-wm5f8 Mar 8 13:43:44.906: INFO: Got endpoints: latency-svc-6hbh6 [749.962289ms] Mar 8 13:43:44.943: INFO: Created: latency-svc-46hjm Mar 8 13:43:45.143: INFO: Got endpoints: latency-svc-czvs9 [936.42422ms] Mar 8 13:43:45.143: INFO: Got endpoints: latency-svc-kkwgc [817.665906ms] Mar 8 13:43:45.143: INFO: Got endpoints: latency-svc-2m4f7 [886.943043ms] Mar 8 13:43:45.143: INFO: Got endpoints: latency-svc-d6c2p 
[786.735995ms] Mar 8 13:43:45.164: INFO: Got endpoints: latency-svc-nhgrj [757.67112ms] Mar 8 13:43:45.183: INFO: Created: latency-svc-zr74j Mar 8 13:43:45.230: INFO: Got endpoints: latency-svc-sdxhx [771.437116ms] Mar 8 13:43:45.231: INFO: Created: latency-svc-b9g6n Mar 8 13:43:45.249: INFO: Created: latency-svc-kntml Mar 8 13:43:45.274: INFO: Created: latency-svc-jwdsx Mar 8 13:43:45.274: INFO: Got endpoints: latency-svc-zlg4p [766.498262ms] Mar 8 13:43:45.292: INFO: Created: latency-svc-xzvgn Mar 8 13:43:45.321: INFO: Created: latency-svc-cd544 Mar 8 13:43:45.322: INFO: Got endpoints: latency-svc-m4v22 [715.074263ms] Mar 8 13:43:45.356: INFO: Created: latency-svc-7z6k8 Mar 8 13:43:45.363: INFO: Got endpoints: latency-svc-dvxhd [792.297096ms] Mar 8 13:43:45.381: INFO: Created: latency-svc-s4hc8 Mar 8 13:43:45.399: INFO: Created: latency-svc-tvvqz Mar 8 13:43:45.411: INFO: Got endpoints: latency-svc-8ftvs [754.191836ms] Mar 8 13:43:45.428: INFO: Created: latency-svc-jftld Mar 8 13:43:45.475: INFO: Got endpoints: latency-svc-xsgg9 [768.849302ms] Mar 8 13:43:45.501: INFO: Created: latency-svc-7bg97 Mar 8 13:43:45.507: INFO: Got endpoints: latency-svc-v8tvc [750.489766ms] Mar 8 13:43:45.537: INFO: Created: latency-svc-7jn74 Mar 8 13:43:45.556: INFO: Got endpoints: latency-svc-f6thh [739.560217ms] Mar 8 13:43:45.595: INFO: Created: latency-svc-nw22k Mar 8 13:43:45.606: INFO: Got endpoints: latency-svc-wm5f8 [749.956743ms] Mar 8 13:43:45.627: INFO: Created: latency-svc-flrxf Mar 8 13:43:45.656: INFO: Got endpoints: latency-svc-46hjm [750.09316ms] Mar 8 13:43:45.674: INFO: Created: latency-svc-rtxrf Mar 8 13:43:45.709: INFO: Got endpoints: latency-svc-zr74j [565.914722ms] Mar 8 13:43:45.728: INFO: Created: latency-svc-pg2tw Mar 8 13:43:45.757: INFO: Got endpoints: latency-svc-b9g6n [613.668703ms] Mar 8 13:43:45.776: INFO: Created: latency-svc-pdc7c Mar 8 13:43:45.807: INFO: Got endpoints: latency-svc-kntml [663.397271ms] Mar 8 13:43:45.873: INFO: Created: 
latency-svc-7t8zg Mar 8 13:43:45.873: INFO: Got endpoints: latency-svc-jwdsx [730.226358ms] Mar 8 13:43:45.897: INFO: Created: latency-svc-pqn4g Mar 8 13:43:45.906: INFO: Got endpoints: latency-svc-xzvgn [742.162654ms] Mar 8 13:43:45.927: INFO: Created: latency-svc-46zmm Mar 8 13:43:45.972: INFO: Got endpoints: latency-svc-cd544 [742.167182ms] Mar 8 13:43:46.004: INFO: Created: latency-svc-z6zhj Mar 8 13:43:46.010: INFO: Got endpoints: latency-svc-7z6k8 [736.132142ms] Mar 8 13:43:46.034: INFO: Created: latency-svc-s4gsr Mar 8 13:43:46.056: INFO: Got endpoints: latency-svc-s4hc8 [734.866166ms] Mar 8 13:43:46.086: INFO: Created: latency-svc-4tszp Mar 8 13:43:46.107: INFO: Got endpoints: latency-svc-tvvqz [743.159066ms] Mar 8 13:43:46.143: INFO: Created: latency-svc-5nqrw Mar 8 13:43:46.156: INFO: Got endpoints: latency-svc-jftld [745.605753ms] Mar 8 13:43:46.179: INFO: Created: latency-svc-xv2qm Mar 8 13:43:46.207: INFO: Got endpoints: latency-svc-7bg97 [731.580549ms] Mar 8 13:43:46.233: INFO: Created: latency-svc-dpd7x Mar 8 13:43:46.257: INFO: Got endpoints: latency-svc-7jn74 [749.892204ms] Mar 8 13:43:46.287: INFO: Created: latency-svc-x57jw Mar 8 13:43:46.314: INFO: Got endpoints: latency-svc-nw22k [757.069718ms] Mar 8 13:43:46.334: INFO: Created: latency-svc-9tgdg Mar 8 13:43:46.356: INFO: Got endpoints: latency-svc-flrxf [750.057448ms] Mar 8 13:43:46.376: INFO: Created: latency-svc-2px8k Mar 8 13:43:46.406: INFO: Got endpoints: latency-svc-rtxrf [749.858304ms] Mar 8 13:43:46.448: INFO: Created: latency-svc-tz8jc Mar 8 13:43:46.457: INFO: Got endpoints: latency-svc-pg2tw [747.529365ms] Mar 8 13:43:46.478: INFO: Created: latency-svc-79smm Mar 8 13:43:46.506: INFO: Got endpoints: latency-svc-pdc7c [749.813051ms] Mar 8 13:43:46.526: INFO: Created: latency-svc-rshz7 Mar 8 13:43:46.557: INFO: Got endpoints: latency-svc-7t8zg [749.972293ms] Mar 8 13:43:46.599: INFO: Created: latency-svc-789wr Mar 8 13:43:46.607: INFO: Got endpoints: latency-svc-pqn4g [733.205636ms] 
Mar 8 13:43:46.629: INFO: Created: latency-svc-skv8v Mar 8 13:43:46.673: INFO: Got endpoints: latency-svc-46zmm [766.90336ms] Mar 8 13:43:46.694: INFO: Created: latency-svc-pnt7q Mar 8 13:43:46.706: INFO: Got endpoints: latency-svc-z6zhj [733.893729ms] Mar 8 13:43:46.741: INFO: Created: latency-svc-j5h4s Mar 8 13:43:46.756: INFO: Got endpoints: latency-svc-s4gsr [746.350093ms] Mar 8 13:43:46.805: INFO: Created: latency-svc-mmvvc Mar 8 13:43:46.806: INFO: Got endpoints: latency-svc-4tszp [749.972664ms] Mar 8 13:43:46.868: INFO: Created: latency-svc-5cxxh Mar 8 13:43:46.869: INFO: Got endpoints: latency-svc-5nqrw [762.047492ms] Mar 8 13:43:46.931: INFO: Created: latency-svc-rxbz7 Mar 8 13:43:46.931: INFO: Got endpoints: latency-svc-xv2qm [774.348508ms] Mar 8 13:43:46.964: INFO: Got endpoints: latency-svc-dpd7x [757.442319ms] Mar 8 13:43:46.964: INFO: Created: latency-svc-qklzl Mar 8 13:43:46.988: INFO: Created: latency-svc-mz8sr Mar 8 13:43:47.007: INFO: Got endpoints: latency-svc-x57jw [749.771606ms] Mar 8 13:43:47.050: INFO: Created: latency-svc-qq9l9 Mar 8 13:43:47.056: INFO: Got endpoints: latency-svc-9tgdg [742.703363ms] Mar 8 13:43:47.078: INFO: Created: latency-svc-m4n9m Mar 8 13:43:47.106: INFO: Got endpoints: latency-svc-2px8k [749.945337ms] Mar 8 13:43:47.176: INFO: Got endpoints: latency-svc-tz8jc [769.639604ms] Mar 8 13:43:47.207: INFO: Got endpoints: latency-svc-79smm [750.174178ms] Mar 8 13:43:47.256: INFO: Got endpoints: latency-svc-rshz7 [749.933555ms] Mar 8 13:43:47.306: INFO: Got endpoints: latency-svc-789wr [749.627549ms] Mar 8 13:43:47.357: INFO: Got endpoints: latency-svc-skv8v [749.918448ms] Mar 8 13:43:47.406: INFO: Got endpoints: latency-svc-pnt7q [733.080127ms] Mar 8 13:43:47.456: INFO: Got endpoints: latency-svc-j5h4s [750.087652ms] Mar 8 13:43:47.511: INFO: Got endpoints: latency-svc-mmvvc [755.049954ms] Mar 8 13:43:47.557: INFO: Got endpoints: latency-svc-5cxxh [750.610302ms] Mar 8 13:43:47.606: INFO: Got endpoints: latency-svc-rxbz7 
[737.72757ms] Mar 8 13:43:47.656: INFO: Got endpoints: latency-svc-qklzl [725.729325ms] Mar 8 13:43:47.706: INFO: Got endpoints: latency-svc-mz8sr [742.039858ms] Mar 8 13:43:47.757: INFO: Got endpoints: latency-svc-qq9l9 [750.075935ms] Mar 8 13:43:47.806: INFO: Got endpoints: latency-svc-m4n9m [750.081225ms] Mar 8 13:43:47.806: INFO: Latencies: [39.885514ms 89.637137ms 110.732485ms 141.421118ms 152.862785ms 171.498984ms 225.575887ms 230.289262ms 254.469064ms 271.802398ms 290.082185ms 351.934823ms 376.274387ms 392.431376ms 393.193935ms 393.861495ms 394.114375ms 398.685354ms 403.298386ms 405.153349ms 405.306384ms 407.330873ms 407.355308ms 407.936986ms 408.900918ms 411.089312ms 411.175381ms 416.077322ms 417.789801ms 417.80537ms 418.023797ms 418.812835ms 418.94672ms 420.157118ms 422.120172ms 423.085709ms 424.783277ms 425.43299ms 426.610482ms 427.940126ms 429.219977ms 430.550837ms 430.852425ms 430.896775ms 431.498908ms 431.568866ms 431.907625ms 433.447537ms 434.460369ms 437.125763ms 437.736469ms 437.94013ms 442.195755ms 443.360032ms 443.360117ms 443.687023ms 444.810695ms 445.264817ms 445.473341ms 449.795104ms 451.85171ms 452.906634ms 453.097401ms 453.591761ms 455.263926ms 458.417188ms 465.65968ms 467.651581ms 473.454727ms 473.587911ms 504.420469ms 520.789153ms 546.639583ms 548.731926ms 565.914722ms 584.218714ms 602.094399ms 613.668703ms 647.93031ms 650.746943ms 663.397271ms 668.987534ms 697.846578ms 708.498095ms 715.074263ms 725.729325ms 730.226358ms 730.283548ms 730.886294ms 730.96446ms 731.580549ms 731.893147ms 733.080127ms 733.205636ms 733.893729ms 734.866166ms 735.383031ms 736.132142ms 736.233256ms 736.854699ms 737.056593ms 737.72757ms 739.40874ms 739.560217ms 740.585477ms 740.819431ms 742.039858ms 742.162654ms 742.167182ms 742.218244ms 742.549321ms 742.681127ms 742.703363ms 742.776351ms 743.134941ms 743.159066ms 743.562224ms 743.726481ms 745.450741ms 745.605753ms 746.207035ms 746.350093ms 746.469154ms 746.705628ms 746.719907ms 747.529365ms 747.753839ms 749.130022ms 
749.327033ms 749.373317ms 749.53359ms 749.601657ms 749.627549ms 749.717549ms 749.763937ms 749.771606ms 749.813051ms 749.858304ms 749.892204ms 749.918448ms 749.933555ms 749.945337ms 749.948586ms 749.952647ms 749.956743ms 749.962289ms 749.972293ms 749.972664ms 750.016541ms 750.057448ms 750.070664ms 750.072205ms 750.075935ms 750.081225ms 750.082491ms 750.087652ms 750.09316ms 750.112152ms 750.120349ms 750.139328ms 750.141444ms 750.174178ms 750.194403ms 750.316405ms 750.413764ms 750.479767ms 750.489766ms 750.508504ms 750.610302ms 750.771959ms 751.738182ms 752.749884ms 752.803851ms 752.853578ms 752.905985ms 753.727776ms 753.870185ms 754.191836ms 754.255921ms 755.049954ms 757.069718ms 757.442319ms 757.67112ms 758.039481ms 760.027533ms 762.047492ms 763.552583ms 764.705467ms 766.498262ms 766.90336ms 768.849302ms 768.97438ms 769.639604ms 771.437116ms 774.348508ms 786.735995ms 792.297096ms 817.665906ms 886.943043ms 936.42422ms] Mar 8 13:43:47.806: INFO: 50 %ile: 737.056593ms Mar 8 13:43:47.806: INFO: 90 %ile: 757.069718ms Mar 8 13:43:47.807: INFO: 99 %ile: 886.943043ms Mar 8 13:43:47.807: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:47.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8584" for this suite. 
• [SLOW TEST:10.809 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":118,"skipped":2103,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:47.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-860cab6e-3465-49c7-9d3f-10b88d7e8fef STEP: Creating a pod to test consume configMaps Mar 8 13:43:47.875: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7" in namespace "projected-5671" to be "success or failure" Mar 8 13:43:47.892: INFO: Pod "pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.287817ms Mar 8 13:43:49.896: INFO: Pod "pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021126365s STEP: Saw pod success Mar 8 13:43:49.896: INFO: Pod "pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7" satisfied condition "success or failure" Mar 8 13:43:49.899: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7 container projected-configmap-volume-test: STEP: delete the pod Mar 8 13:43:49.918: INFO: Waiting for pod pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7 to disappear Mar 8 13:43:49.922: INFO: Pod pod-projected-configmaps-8ff56f7e-b14f-40ee-9e53-15a96f1e21b7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:43:49.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5671" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:43:49.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 8 13:43:50.006: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:50.021: INFO: Number of nodes with available pods: 0 Mar 8 13:43:50.021: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:51.025: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:51.028: INFO: Number of nodes with available pods: 0 Mar 8 13:43:51.028: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:52.025: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:52.029: INFO: Number of nodes with available pods: 1 Mar 8 13:43:52.029: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:53.027: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:53.035: INFO: Number of nodes with available pods: 2 Mar 8 13:43:53.035: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 8 13:43:53.059: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:53.065: INFO: Number of nodes with available pods: 1 Mar 8 13:43:53.065: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:54.073: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:54.078: INFO: Number of nodes with available pods: 1 Mar 8 13:43:54.078: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:55.073: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:55.098: INFO: Number of nodes with available pods: 1 Mar 8 13:43:55.098: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:56.068: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:56.073: INFO: Number of nodes with available pods: 1 Mar 8 13:43:56.073: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:57.073: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:57.123: INFO: Number of nodes with available pods: 1 Mar 8 13:43:57.123: INFO: Node kind-worker is running more than one daemon pod Mar 8 13:43:58.074: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:58.094: INFO: Number of nodes with available pods: 1 Mar 8 13:43:58.094: INFO: Node kind-worker is running more 
than one daemon pod Mar 8 13:43:59.069: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 8 13:43:59.111: INFO: Number of nodes with available pods: 2 Mar 8 13:43:59.111: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7920, will wait for the garbage collector to delete the pods Mar 8 13:43:59.207: INFO: Deleting DaemonSet.extensions daemon-set took: 18.677163ms Mar 8 13:43:59.607: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.193497ms Mar 8 13:44:02.332: INFO: Number of nodes with available pods: 0 Mar 8 13:44:02.332: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 13:44:02.339: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7920/daemonsets","resourceVersion":"17991"},"items":null} Mar 8 13:44:02.345: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7920/pods","resourceVersion":"17992"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:44:02.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7920" for this suite. 
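The poll loop above checks that the DaemonSet lands a pod on every node whose taints it tolerates; `kind-control-plane` is skipped because the DaemonSet carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint. A minimal sketch of that scheduling predicate (the helper and cluster dict are illustrative, not part of the test code; node names mirror the log):

```python
def schedulable_nodes(nodes, tolerations):
    """Return the nodes whose NoSchedule taint keys are all tolerated."""
    result = []
    for name, taint_keys in nodes.items():
        if all(key in tolerations for key in taint_keys):
            result.append(name)
    return result

# Cluster shape taken from the log: one tainted control-plane, two workers.
cluster = {
    "kind-control-plane": ["node-role.kubernetes.io/master"],
    "kind-worker": [],
    "kind-worker2": [],
}

# The test's DaemonSet has no master toleration, so only the two
# workers count toward "Number of running nodes: 2".
print(schedulable_nodes(cluster, tolerations=[]))
# → ['kind-worker', 'kind-worker2']
```

With a master toleration added, the same predicate would admit all three nodes, which is why the test logs "skip checking this node" rather than treating the control plane as a failure.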
• [SLOW TEST:12.530 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":120,"skipped":2137,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:44:02.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-9960 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9960 STEP: Deleting pre-stop pod Mar 8 13:44:11.617: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:44:11.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9960" for this suite. • [SLOW TEST:9.164 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":121,"skipped":2141,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:44:11.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:44:11.730: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e4615d3f-cd16-4b7d-8271-03aa92894297" in namespace 
"security-context-test-3386" to be "success or failure" Mar 8 13:44:11.736: INFO: Pod "alpine-nnp-false-e4615d3f-cd16-4b7d-8271-03aa92894297": Phase="Pending", Reason="", readiness=false. Elapsed: 5.871305ms Mar 8 13:44:13.758: INFO: Pod "alpine-nnp-false-e4615d3f-cd16-4b7d-8271-03aa92894297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027755141s Mar 8 13:44:15.761: INFO: Pod "alpine-nnp-false-e4615d3f-cd16-4b7d-8271-03aa92894297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031274536s Mar 8 13:44:15.761: INFO: Pod "alpine-nnp-false-e4615d3f-cd16-4b7d-8271-03aa92894297" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:44:15.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3386" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2160,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:44:15.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
[AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:44:17.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-126" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2163,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:44:17.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 8 13:44:18.543: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 8 13:44:20.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271858, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271858, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271858, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719271858, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:44:23.585: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:44:23.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:44:24.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3033" for this suite. 
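The conversion step above ("Creating a v1 custom resource … v2 custom resource should be converted") exercises a webhook that rewrites a custom resource from one version's schema to another. A hypothetical sketch of the conversion function such a webhook would apply per object — the group name, field names, and the `hostPort` split are illustrative assumptions, not taken from this suite's actual converter:

```python
def convert_to_v2(obj):
    """Rewrite a v1 custom resource dict into the (hypothetical) v2 schema."""
    converted = dict(obj)
    group = obj["apiVersion"].split("/")[0]
    converted["apiVersion"] = group + "/v2"
    # Illustrative schema change: v1's combined "hostPort" string
    # becomes separate "host" and "port" fields in v2.
    if "hostPort" in converted:
        host, port = converted.pop("hostPort").split(":")
        converted["host"] = host
        converted["port"] = port
    return converted

v2_obj = convert_to_v2({
    "apiVersion": "stable.example.com/v1",
    "kind": "E2eTest",
    "hostPort": "localhost:8080",
})
print(v2_obj["apiVersion"], v2_obj["host"], v2_obj["port"])
# → stable.example.com/v2 localhost 8080
```

In the real mechanism the API server wraps each object in a ConversionReview and the webhook returns the converted objects; the sketch covers only the per-object transform.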
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.509 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":124,"skipped":2163,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:44:24.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 8 13:44:24.443: INFO: >>> kubeConfig: /root/.kube/config Mar 8 13:44:27.318: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:44:38.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9102" for this suite. • [SLOW TEST:14.127 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":125,"skipped":2164,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:44:38.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-wsc6 STEP: Creating a pod to test atomic-volume-subpath Mar 8 13:44:38.606: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wsc6" in namespace "subpath-2503" to be "success or failure" Mar 8 13:44:38.623: 
INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.309912ms Mar 8 13:44:40.626: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 2.019576249s Mar 8 13:44:42.629: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 4.023081907s Mar 8 13:44:44.633: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 6.026529403s Mar 8 13:44:46.637: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 8.030371346s Mar 8 13:44:48.640: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 10.034174285s Mar 8 13:44:50.644: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 12.038154748s Mar 8 13:44:52.648: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 14.041753448s Mar 8 13:44:54.652: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 16.045653092s Mar 8 13:44:56.656: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 18.049276292s Mar 8 13:44:58.659: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Running", Reason="", readiness=true. Elapsed: 20.052710079s Mar 8 13:45:00.663: INFO: Pod "pod-subpath-test-configmap-wsc6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.056412677s STEP: Saw pod success Mar 8 13:45:00.663: INFO: Pod "pod-subpath-test-configmap-wsc6" satisfied condition "success or failure" Mar 8 13:45:00.665: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-wsc6 container test-container-subpath-configmap-wsc6: STEP: delete the pod Mar 8 13:45:00.713: INFO: Waiting for pod pod-subpath-test-configmap-wsc6 to disappear Mar 8 13:45:00.741: INFO: Pod pod-subpath-test-configmap-wsc6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-wsc6 Mar 8 13:45:00.741: INFO: Deleting pod "pod-subpath-test-configmap-wsc6" in namespace "subpath-2503" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:45:00.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2503" for this suite. • [SLOW TEST:22.238 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":126,"skipped":2164,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Mar 8 13:45:00.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 8 13:45:00.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7115 /api/v1/namespaces/watch-7115/configmaps/e2e-watch-test-watch-closed 89b124d4-ddd8-4315-a21b-cb7b83891ea9 18447 0 2020-03-08 13:45:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 13:45:00.812: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7115 /api/v1/namespaces/watch-7115/configmaps/e2e-watch-test-watch-closed 89b124d4-ddd8-4315-a21b-cb7b83891ea9 18448 0 2020-03-08 13:45:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 8 13:45:00.823: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7115 /api/v1/namespaces/watch-7115/configmaps/e2e-watch-test-watch-closed 89b124d4-ddd8-4315-a21b-cb7b83891ea9 18449 0 2020-03-08 13:45:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Mar 8 13:45:00.823: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7115 /api/v1/namespaces/watch-7115/configmaps/e2e-watch-test-watch-closed 89b124d4-ddd8-4315-a21b-cb7b83891ea9 18450 0 2020-03-08 13:45:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:45:00.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7115" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":127,"skipped":2165,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:45:00.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2051/configmap-test-1963a341-e5c6-4349-8f55-33d17f0060a9 STEP: Creating a pod to test consume configMaps Mar 8 13:45:00.938: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80" in namespace "configmap-2051" to be "success or failure" Mar 8 
13:45:00.941: INFO: Pod "pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80": Phase="Pending", Reason="", readiness=false. Elapsed: 3.52041ms Mar 8 13:45:02.945: INFO: Pod "pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007003736s STEP: Saw pod success Mar 8 13:45:02.945: INFO: Pod "pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80" satisfied condition "success or failure" Mar 8 13:45:02.947: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80 container env-test: STEP: delete the pod Mar 8 13:45:02.966: INFO: Waiting for pod pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80 to disappear Mar 8 13:45:02.970: INFO: Pod pod-configmaps-ca2750bb-fbbf-4fb3-a135-5d27219cea80 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:45:02.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2051" for this suite. 
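The ConfigMap test above verifies that keys from `configmap-test-…` surface as environment variables inside the `env-test` container. A minimal sketch of the `configMapKeyRef`-style resolution the kubelet performs — the key/value and variable names here are illustrative, not read from the log:

```python
def env_from_configmap(configmap_data, env_specs):
    """Resolve configMapKeyRef-style env entries against ConfigMap data."""
    return {spec["name"]: configmap_data[spec["key"]] for spec in env_specs}

# Illustrative ConfigMap payload and env spec.
cm_data = {"data-1": "value-1"}
env = env_from_configmap(cm_data, [{"name": "CONFIG_DATA_1", "key": "data-1"}])
print(env)
# → {'CONFIG_DATA_1': 'value-1'}
```

The test then reads the container's log output to confirm the variable held the expected value, which is why the pod runs to "Succeeded" rather than staying up.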
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2175,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:45:02.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d6b21497-3a80-4268-83b3-225c656a2e1e STEP: Creating a pod to test consume secrets Mar 8 13:45:03.027: INFO: Waiting up to 5m0s for pod "pod-secrets-7b2988ef-cec2-45be-8840-819612705507" in namespace "secrets-3937" to be "success or failure" Mar 8 13:45:03.045: INFO: Pod "pod-secrets-7b2988ef-cec2-45be-8840-819612705507": Phase="Pending", Reason="", readiness=false. Elapsed: 18.280111ms Mar 8 13:45:05.048: INFO: Pod "pod-secrets-7b2988ef-cec2-45be-8840-819612705507": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021926746s STEP: Saw pod success Mar 8 13:45:05.049: INFO: Pod "pod-secrets-7b2988ef-cec2-45be-8840-819612705507" satisfied condition "success or failure" Mar 8 13:45:05.051: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-7b2988ef-cec2-45be-8840-819612705507 container secret-env-test: STEP: delete the pod Mar 8 13:45:05.085: INFO: Waiting for pod pod-secrets-7b2988ef-cec2-45be-8840-819612705507 to disappear Mar 8 13:45:05.091: INFO: Pod pod-secrets-7b2988ef-cec2-45be-8840-819612705507 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:45:05.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3937" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2191,"failed":0} ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:45:05.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
busybox-e6adc272-fc3b-4ea2-8776-d5fd4b20fb61 in namespace container-probe-3159 Mar 8 13:45:07.173: INFO: Started pod busybox-e6adc272-fc3b-4ea2-8776-d5fd4b20fb61 in namespace container-probe-3159 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 13:45:07.177: INFO: Initial restart count of pod busybox-e6adc272-fc3b-4ea2-8776-d5fd4b20fb61 is 0 Mar 8 13:45:57.270: INFO: Restart count of pod container-probe-3159/busybox-e6adc272-fc3b-4ea2-8776-d5fd4b20fb61 is now 1 (50.093767932s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:45:57.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3159" for this suite. • [SLOW TEST:52.247 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:45:57.347: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 13:46:07.524537 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 13:46:07.524: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:46:07.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1153" for this suite. 
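The garbage-collector test above hinges on one rule: a dependent object is collected only when it has no remaining live owners, so pods that list both `simpletest-rc-to-be-deleted` and `simpletest-rc-to-stay` as owners must survive the first RC's deletion. A sketch of that decision (helper and object names are illustrative, modeled on the RC names in the log):

```python
def collectible(dependents, live_owners):
    """A dependent is deletable only when none of its owners remain live."""
    return [name for name, owners in dependents.items()
            if not any(owner in live_owners for owner in owners)]

dependents = {
    "pod-a": ["rc-to-be-deleted"],                # orphaned once the RC goes
    "pod-b": ["rc-to-be-deleted", "rc-to-stay"],  # still owned, must survive
}

# After deleting rc-to-be-deleted, only pod-a is eligible for collection.
print(collectible(dependents, live_owners={"rc-to-stay"}))
# → ['pod-a']
```

The real garbage collector additionally honors `blockOwnerDeletion` and foreground-deletion finalizers; the sketch covers only the liveness-of-owners check the test asserts.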
• [SLOW TEST:10.185 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":131,"skipped":2230,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:46:07.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-16f10b8f-96c3-4c14-9dab-e1ba1f1a1856
STEP: Creating a pod to test consume secrets
Mar 8 13:46:07.606: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf" in namespace "projected-416" to be "success or failure"
Mar 8 13:46:07.613: INFO: Pod "pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.213742ms
Mar 8 13:46:09.617: INFO: Pod "pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010691577s
STEP: Saw pod success
Mar 8 13:46:09.617: INFO: Pod "pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf" satisfied condition "success or failure"
Mar 8 13:46:09.619: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf container projected-secret-volume-test:
STEP: delete the pod
Mar 8 13:46:09.645: INFO: Waiting for pod pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf to disappear
Mar 8 13:46:09.649: INFO: Pod pod-projected-secrets-57dad749-501e-4366-917a-23e20965fedf no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 13:46:09.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-416" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2248,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:46:09.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1782 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 13:46:09.700: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 13:46:27.829: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.123:8080/dial?request=hostname&protocol=udp&host=10.244.2.120&port=8081&tries=1'] Namespace:pod-network-test-1782 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:46:27.830: INFO: >>> kubeConfig: /root/.kube/config I0308 13:46:27.866780 6 log.go:172] (0xc002c786e0) (0xc002836f00) Create stream I0308 13:46:27.866817 6 log.go:172] (0xc002c786e0) (0xc002836f00) Stream added, broadcasting: 1 I0308 13:46:27.871406 6 log.go:172] (0xc002c786e0) Reply frame received for 1 I0308 
13:46:27.871446 6 log.go:172] (0xc002c786e0) (0xc000ba8460) Create stream I0308 13:46:27.871460 6 log.go:172] (0xc002c786e0) (0xc000ba8460) Stream added, broadcasting: 3 I0308 13:46:27.872667 6 log.go:172] (0xc002c786e0) Reply frame received for 3 I0308 13:46:27.872705 6 log.go:172] (0xc002c786e0) (0xc000ba8960) Create stream I0308 13:46:27.872722 6 log.go:172] (0xc002c786e0) (0xc000ba8960) Stream added, broadcasting: 5 I0308 13:46:27.873796 6 log.go:172] (0xc002c786e0) Reply frame received for 5 I0308 13:46:27.940630 6 log.go:172] (0xc002c786e0) Data frame received for 3 I0308 13:46:27.940647 6 log.go:172] (0xc000ba8460) (3) Data frame handling I0308 13:46:27.940662 6 log.go:172] (0xc000ba8460) (3) Data frame sent I0308 13:46:27.941494 6 log.go:172] (0xc002c786e0) Data frame received for 3 I0308 13:46:27.941519 6 log.go:172] (0xc000ba8460) (3) Data frame handling I0308 13:46:27.941680 6 log.go:172] (0xc002c786e0) Data frame received for 5 I0308 13:46:27.941698 6 log.go:172] (0xc000ba8960) (5) Data frame handling I0308 13:46:27.943439 6 log.go:172] (0xc002c786e0) Data frame received for 1 I0308 13:46:27.943460 6 log.go:172] (0xc002836f00) (1) Data frame handling I0308 13:46:27.943473 6 log.go:172] (0xc002836f00) (1) Data frame sent I0308 13:46:27.943485 6 log.go:172] (0xc002c786e0) (0xc002836f00) Stream removed, broadcasting: 1 I0308 13:46:27.943559 6 log.go:172] (0xc002c786e0) (0xc002836f00) Stream removed, broadcasting: 1 I0308 13:46:27.943572 6 log.go:172] (0xc002c786e0) (0xc000ba8460) Stream removed, broadcasting: 3 I0308 13:46:27.943679 6 log.go:172] (0xc002c786e0) (0xc000ba8960) Stream removed, broadcasting: 5 I0308 13:46:27.943825 6 log.go:172] (0xc002c786e0) Go away received Mar 8 13:46:27.943: INFO: Waiting for responses: map[] Mar 8 13:46:27.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.123:8080/dial?request=hostname&protocol=udp&host=10.244.1.122&port=8081&tries=1'] Namespace:pod-network-test-1782 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:46:27.946: INFO: >>> kubeConfig: /root/.kube/config I0308 13:46:27.973880 6 log.go:172] (0xc005670370) (0xc001fe6d20) Create stream I0308 13:46:27.973911 6 log.go:172] (0xc005670370) (0xc001fe6d20) Stream added, broadcasting: 1 I0308 13:46:27.976075 6 log.go:172] (0xc005670370) Reply frame received for 1 I0308 13:46:27.976115 6 log.go:172] (0xc005670370) (0xc002837040) Create stream I0308 13:46:27.976126 6 log.go:172] (0xc005670370) (0xc002837040) Stream added, broadcasting: 3 I0308 13:46:27.977109 6 log.go:172] (0xc005670370) Reply frame received for 3 I0308 13:46:27.977142 6 log.go:172] (0xc005670370) (0xc0028370e0) Create stream I0308 13:46:27.977152 6 log.go:172] (0xc005670370) (0xc0028370e0) Stream added, broadcasting: 5 I0308 13:46:27.978099 6 log.go:172] (0xc005670370) Reply frame received for 5 I0308 13:46:28.042325 6 log.go:172] (0xc005670370) Data frame received for 3 I0308 13:46:28.042346 6 log.go:172] (0xc002837040) (3) Data frame handling I0308 13:46:28.042363 6 log.go:172] (0xc002837040) (3) Data frame sent I0308 13:46:28.042979 6 log.go:172] (0xc005670370) Data frame received for 3 I0308 13:46:28.043035 6 log.go:172] (0xc002837040) (3) Data frame handling I0308 13:46:28.043066 6 log.go:172] (0xc005670370) Data frame received for 5 I0308 13:46:28.043085 6 log.go:172] (0xc0028370e0) (5) Data frame handling I0308 13:46:28.045467 6 log.go:172] (0xc005670370) Data frame received for 1 I0308 13:46:28.045488 6 log.go:172] (0xc001fe6d20) (1) Data frame handling I0308 13:46:28.045505 6 log.go:172] (0xc001fe6d20) (1) Data frame sent I0308 13:46:28.045522 6 log.go:172] (0xc005670370) (0xc001fe6d20) Stream removed, broadcasting: 1 I0308 13:46:28.045542 6 log.go:172] (0xc005670370) Go away received I0308 13:46:28.045644 6 log.go:172] (0xc005670370) (0xc001fe6d20) Stream removed, broadcasting: 1 I0308 13:46:28.045667 6 log.go:172] 
(0xc005670370) (0xc002837040) Stream removed, broadcasting: 3 I0308 13:46:28.045678 6 log.go:172] (0xc005670370) (0xc0028370e0) Stream removed, broadcasting: 5 Mar 8 13:46:28.045: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:46:28.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1782" for this suite. • [SLOW TEST:18.398 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2249,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:46:28.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:46:28.114: INFO: Creating deployment "test-recreate-deployment" Mar 8 13:46:28.117: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 8 13:46:28.139: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 8 13:46:30.145: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 8 13:46:30.147: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 8 13:46:30.153: INFO: Updating deployment test-recreate-deployment Mar 8 13:46:30.153: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 13:46:30.301: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3931 /apis/apps/v1/namespaces/deployment-3931/deployments/test-recreate-deployment 3b7439b6-f8de-45b8-9698-83906cf47969 19134 2 2020-03-08 13:46:28 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056e2538 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 13:46:30 +0000 UTC,LastTransitionTime:2020-03-08 13:46:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-08 13:46:30 +0000 UTC,LastTransitionTime:2020-03-08 13:46:28 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 8 13:46:30.341: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3931 /apis/apps/v1/namespaces/deployment-3931/replicasets/test-recreate-deployment-5f94c574ff 61ead503-9d8f-4b40-bed9-7e241ef8adfb 19132 1 2020-03-08 13:46:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3b7439b6-f8de-45b8-9698-83906cf47969 0xc0056e28f7 0xc0056e28f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056e2958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 13:46:30.341: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 8 13:46:30.341: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3931 /apis/apps/v1/namespaces/deployment-3931/replicasets/test-recreate-deployment-799c574856 2e07ab63-fb2a-4b98-adb8-1afeab828e35 19123 2 2020-03-08 13:46:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3b7439b6-f8de-45b8-9698-83906cf47969 0xc0056e29c7 0xc0056e29c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056e2a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 13:46:30.344: INFO: Pod "test-recreate-deployment-5f94c574ff-f6fvl" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-f6fvl test-recreate-deployment-5f94c574ff- deployment-3931 /api/v1/namespaces/deployment-3931/pods/test-recreate-deployment-5f94c574ff-f6fvl 7b4889a0-f75a-4aea-9469-5321cd255908 19135 0 2020-03-08 13:46:30 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 61ead503-9d8f-4b40-bed9-7e241ef8adfb 0xc0056e2eb7 0xc0056e2eb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cbb5f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cbb5f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cbb5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdom
ain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:46:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:46:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:46:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:46:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-08 13:46:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:46:30.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3931" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":134,"skipped":2257,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:46:30.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2184 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 13:46:30.394: INFO: Waiting 
up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 13:46:54.469: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.122:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2184 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:46:54.469: INFO: >>> kubeConfig: /root/.kube/config I0308 13:46:54.504997 6 log.go:172] (0xc002c242c0) (0xc000e58320) Create stream I0308 13:46:54.505032 6 log.go:172] (0xc002c242c0) (0xc000e58320) Stream added, broadcasting: 1 I0308 13:46:54.507144 6 log.go:172] (0xc002c242c0) Reply frame received for 1 I0308 13:46:54.507183 6 log.go:172] (0xc002c242c0) (0xc000e585a0) Create stream I0308 13:46:54.507197 6 log.go:172] (0xc002c242c0) (0xc000e585a0) Stream added, broadcasting: 3 I0308 13:46:54.508176 6 log.go:172] (0xc002c242c0) Reply frame received for 3 I0308 13:46:54.508206 6 log.go:172] (0xc002c242c0) (0xc001fe7b80) Create stream I0308 13:46:54.508216 6 log.go:172] (0xc002c242c0) (0xc001fe7b80) Stream added, broadcasting: 5 I0308 13:46:54.508933 6 log.go:172] (0xc002c242c0) Reply frame received for 5 I0308 13:46:54.586168 6 log.go:172] (0xc002c242c0) Data frame received for 3 I0308 13:46:54.586203 6 log.go:172] (0xc000e585a0) (3) Data frame handling I0308 13:46:54.586227 6 log.go:172] (0xc000e585a0) (3) Data frame sent I0308 13:46:54.586414 6 log.go:172] (0xc002c242c0) Data frame received for 3 I0308 13:46:54.586437 6 log.go:172] (0xc000e585a0) (3) Data frame handling I0308 13:46:54.586527 6 log.go:172] (0xc002c242c0) Data frame received for 5 I0308 13:46:54.586545 6 log.go:172] (0xc001fe7b80) (5) Data frame handling I0308 13:46:54.588360 6 log.go:172] (0xc002c242c0) Data frame received for 1 I0308 13:46:54.588383 6 log.go:172] (0xc000e58320) (1) Data frame handling I0308 13:46:54.588398 6 log.go:172] (0xc000e58320) (1) Data frame sent I0308 13:46:54.588425 6 
log.go:172] (0xc002c242c0) (0xc000e58320) Stream removed, broadcasting: 1 I0308 13:46:54.588441 6 log.go:172] (0xc002c242c0) Go away received I0308 13:46:54.588525 6 log.go:172] (0xc002c242c0) (0xc000e58320) Stream removed, broadcasting: 1 I0308 13:46:54.588544 6 log.go:172] (0xc002c242c0) (0xc000e585a0) Stream removed, broadcasting: 3 I0308 13:46:54.588555 6 log.go:172] (0xc002c242c0) (0xc001fe7b80) Stream removed, broadcasting: 5 Mar 8 13:46:54.588: INFO: Found all expected endpoints: [netserver-0] Mar 8 13:46:54.598: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.125:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2184 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:46:54.598: INFO: >>> kubeConfig: /root/.kube/config I0308 13:46:54.624605 6 log.go:172] (0xc005670a50) (0xc001ec40a0) Create stream I0308 13:46:54.624626 6 log.go:172] (0xc005670a50) (0xc001ec40a0) Stream added, broadcasting: 1 I0308 13:46:54.627887 6 log.go:172] (0xc005670a50) Reply frame received for 1 I0308 13:46:54.627951 6 log.go:172] (0xc005670a50) (0xc001ec4140) Create stream I0308 13:46:54.627978 6 log.go:172] (0xc005670a50) (0xc001ec4140) Stream added, broadcasting: 3 I0308 13:46:54.630755 6 log.go:172] (0xc005670a50) Reply frame received for 3 I0308 13:46:54.630782 6 log.go:172] (0xc005670a50) (0xc000dff9a0) Create stream I0308 13:46:54.630791 6 log.go:172] (0xc005670a50) (0xc000dff9a0) Stream added, broadcasting: 5 I0308 13:46:54.631627 6 log.go:172] (0xc005670a50) Reply frame received for 5 I0308 13:46:54.711817 6 log.go:172] (0xc005670a50) Data frame received for 3 I0308 13:46:54.711846 6 log.go:172] (0xc001ec4140) (3) Data frame handling I0308 13:46:54.711865 6 log.go:172] (0xc001ec4140) (3) Data frame sent I0308 13:46:54.711883 6 log.go:172] (0xc005670a50) Data frame received for 3 I0308 13:46:54.711895 6 log.go:172] 
(0xc001ec4140) (3) Data frame handling I0308 13:46:54.712248 6 log.go:172] (0xc005670a50) Data frame received for 5 I0308 13:46:54.712288 6 log.go:172] (0xc000dff9a0) (5) Data frame handling I0308 13:46:54.713668 6 log.go:172] (0xc005670a50) Data frame received for 1 I0308 13:46:54.713689 6 log.go:172] (0xc001ec40a0) (1) Data frame handling I0308 13:46:54.713703 6 log.go:172] (0xc001ec40a0) (1) Data frame sent I0308 13:46:54.713951 6 log.go:172] (0xc005670a50) (0xc001ec40a0) Stream removed, broadcasting: 1 I0308 13:46:54.714067 6 log.go:172] (0xc005670a50) (0xc001ec40a0) Stream removed, broadcasting: 1 I0308 13:46:54.714092 6 log.go:172] (0xc005670a50) (0xc001ec4140) Stream removed, broadcasting: 3 I0308 13:46:54.714148 6 log.go:172] (0xc005670a50) (0xc000dff9a0) Stream removed, broadcasting: 5 Mar 8 13:46:54.714: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:46:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0308 13:46:54.714367 6 log.go:172] (0xc005670a50) Go away received STEP: Destroying namespace "pod-network-test-2184" for this suite. 
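The node-pod HTTP check above curls `http://<pod-ip>:8080/hostName` on each netserver pod and matches the reply against the expected pod names. That round trip can be sketched on a loopback address with `net/http/httptest`; `newNetserver` and `checkEndpoint` are illustrative helpers, not the e2e framework's API, and the real agnhost netserver listens on 8080 inside the pod rather than on an ephemeral test port.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// newNetserver starts a test server exposing the one endpoint the e2e
// check relies on: GET /hostName answers with the serving pod's name.
func newNetserver(hostname string) *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/hostName", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, hostname)
	})
	return httptest.NewServer(mux)
}

// checkEndpoint plays the client side of the probe: fetch /hostName and
// report which backend answered, as the curl in the log does.
func checkEndpoint(baseURL string) (string, error) {
	resp, err := http.Get(baseURL + "/hostName")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	srv := newNetserver("netserver-0")
	defer srv.Close()

	got, err := checkEndpoint(srv.URL)
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // netserver-0
}
```

The test passes once every expected endpoint name (here `netserver-0`, `netserver-1`) has been seen in a reply, which is what the "Found all expected endpoints" lines record.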
• [SLOW TEST:24.370 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2271,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 13:46:54.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6010
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 8 13:46:54.765: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 8 13:47:12.912: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.124 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6010 PodName:host-test-container-pod
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:47:12.912: INFO: >>> kubeConfig: /root/.kube/config I0308 13:47:12.945582 6 log.go:172] (0xc002c78fd0) (0xc00113be00) Create stream I0308 13:47:12.945618 6 log.go:172] (0xc002c78fd0) (0xc00113be00) Stream added, broadcasting: 1 I0308 13:47:12.947872 6 log.go:172] (0xc002c78fd0) Reply frame received for 1 I0308 13:47:12.947911 6 log.go:172] (0xc002c78fd0) (0xc000e59900) Create stream I0308 13:47:12.947922 6 log.go:172] (0xc002c78fd0) (0xc000e59900) Stream added, broadcasting: 3 I0308 13:47:12.948889 6 log.go:172] (0xc002c78fd0) Reply frame received for 3 I0308 13:47:12.948926 6 log.go:172] (0xc002c78fd0) (0xc001ec41e0) Create stream I0308 13:47:12.948939 6 log.go:172] (0xc002c78fd0) (0xc001ec41e0) Stream added, broadcasting: 5 I0308 13:47:12.949843 6 log.go:172] (0xc002c78fd0) Reply frame received for 5 I0308 13:47:14.015086 6 log.go:172] (0xc002c78fd0) Data frame received for 3 I0308 13:47:14.015122 6 log.go:172] (0xc000e59900) (3) Data frame handling I0308 13:47:14.015135 6 log.go:172] (0xc000e59900) (3) Data frame sent I0308 13:47:14.015146 6 log.go:172] (0xc002c78fd0) Data frame received for 5 I0308 13:47:14.015158 6 log.go:172] (0xc001ec41e0) (5) Data frame handling I0308 13:47:14.015542 6 log.go:172] (0xc002c78fd0) Data frame received for 3 I0308 13:47:14.015564 6 log.go:172] (0xc000e59900) (3) Data frame handling I0308 13:47:14.017570 6 log.go:172] (0xc002c78fd0) Data frame received for 1 I0308 13:47:14.017609 6 log.go:172] (0xc00113be00) (1) Data frame handling I0308 13:47:14.017629 6 log.go:172] (0xc00113be00) (1) Data frame sent I0308 13:47:14.017659 6 log.go:172] (0xc002c78fd0) (0xc00113be00) Stream removed, broadcasting: 1 I0308 13:47:14.017757 6 log.go:172] (0xc002c78fd0) (0xc00113be00) Stream removed, broadcasting: 1 I0308 13:47:14.017776 6 log.go:172] (0xc002c78fd0) (0xc000e59900) Stream removed, broadcasting: 3 I0308 13:47:14.017792 6 log.go:172] 
(0xc002c78fd0) (0xc001ec41e0) Stream removed, broadcasting: 5 Mar 8 13:47:14.017: INFO: Found all expected endpoints: [netserver-0] I0308 13:47:14.017878 6 log.go:172] (0xc002c78fd0) Go away received Mar 8 13:47:14.020: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.126 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6010 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 13:47:14.020: INFO: >>> kubeConfig: /root/.kube/config I0308 13:47:14.068997 6 log.go:172] (0xc002c24840) (0xc001598280) Create stream I0308 13:47:14.069023 6 log.go:172] (0xc002c24840) (0xc001598280) Stream added, broadcasting: 1 I0308 13:47:14.071169 6 log.go:172] (0xc002c24840) Reply frame received for 1 I0308 13:47:14.071217 6 log.go:172] (0xc002c24840) (0xc001ec4280) Create stream I0308 13:47:14.071235 6 log.go:172] (0xc002c24840) (0xc001ec4280) Stream added, broadcasting: 3 I0308 13:47:14.072199 6 log.go:172] (0xc002c24840) Reply frame received for 3 I0308 13:47:14.072244 6 log.go:172] (0xc002c24840) (0xc0007f88c0) Create stream I0308 13:47:14.072258 6 log.go:172] (0xc002c24840) (0xc0007f88c0) Stream added, broadcasting: 5 I0308 13:47:14.073220 6 log.go:172] (0xc002c24840) Reply frame received for 5 I0308 13:47:15.137604 6 log.go:172] (0xc002c24840) Data frame received for 3 I0308 13:47:15.137636 6 log.go:172] (0xc001ec4280) (3) Data frame handling I0308 13:47:15.137650 6 log.go:172] (0xc001ec4280) (3) Data frame sent I0308 13:47:15.137669 6 log.go:172] (0xc002c24840) Data frame received for 3 I0308 13:47:15.137684 6 log.go:172] (0xc001ec4280) (3) Data frame handling I0308 13:47:15.137880 6 log.go:172] (0xc002c24840) Data frame received for 5 I0308 13:47:15.137910 6 log.go:172] (0xc0007f88c0) (5) Data frame handling I0308 13:47:15.140226 6 log.go:172] (0xc002c24840) Data frame received for 1 I0308 13:47:15.140250 6 log.go:172] (0xc001598280) (1) Data frame handling I0308 
13:47:15.140280 6 log.go:172] (0xc001598280) (1) Data frame sent I0308 13:47:15.140564 6 log.go:172] (0xc002c24840) (0xc001598280) Stream removed, broadcasting: 1 I0308 13:47:15.140590 6 log.go:172] (0xc002c24840) Go away received I0308 13:47:15.140717 6 log.go:172] (0xc002c24840) (0xc001598280) Stream removed, broadcasting: 1 I0308 13:47:15.140746 6 log.go:172] (0xc002c24840) (0xc001ec4280) Stream removed, broadcasting: 3 I0308 13:47:15.140763 6 log.go:172] (0xc002c24840) (0xc0007f88c0) Stream removed, broadcasting: 5 Mar 8 13:47:15.140: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:47:15.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6010" for this suite. • [SLOW TEST:20.426 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2276,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:47:15.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 13:47:15.200: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 13:47:15.211: INFO: Waiting for terminating namespaces to be deleted... Mar 8 13:47:15.213: INFO: Logging pods the kubelet thinks are on node kind-worker before test Mar 8 13:47:15.220: INFO: test-container-pod from pod-network-test-6010 started at 2020-03-08 13:47:10 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.220: INFO: Container webserver ready: true, restart count 0 Mar 8 13:47:15.220: INFO: netserver-0 from pod-network-test-6010 started at 2020-03-08 13:46:54 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.220: INFO: Container webserver ready: true, restart count 0 Mar 8 13:47:15.220: INFO: kindnet-p9whg from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.220: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 13:47:15.220: INFO: kube-proxy-pz8tf from kube-system started at 2020-03-08 12:58:54 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.220: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:47:15.220: INFO: Logging pods the kubelet thinks are on node kind-worker2 before test Mar 8 13:47:15.236: INFO: kube-proxy-vfcnx from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.236: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:47:15.236: INFO: host-test-container-pod from pod-network-test-6010 started at 2020-03-08 13:47:10
+0000 UTC (1 container statuses recorded) Mar 8 13:47:15.236: INFO: Container agnhost ready: true, restart count 0 Mar 8 13:47:15.236: INFO: netserver-1 from pod-network-test-6010 started at 2020-03-08 13:46:54 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.236: INFO: Container webserver ready: true, restart count 0 Mar 8 13:47:15.236: INFO: kindnet-mjfxb from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:47:15.236: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0a4b3a8a-804e-4bce-ba4b-f0f99b780b1f 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-0a4b3a8a-804e-4bce-ba4b-f0f99b780b1f off the node kind-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0a4b3a8a-804e-4bce-ba4b-f0f99b780b1f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:19.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9715" for this suite. 
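The hostPort conflict the sched-pred test exercises above can be reproduced by hand with two plain pod manifests. This is a sketch, not the framework-generated specs: pod/container names and the image are illustrative, and the nodeSelector reuses the random label the test applied to kind-worker2.

```yaml
# pod4: binds hostPort 54322 on all host addresses
# (omitting ports[].hostIP is equivalent to 0.0.0.0).
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-0a4b3a8a-804e-4bce-ba4b-f0f99b780b1f: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
---
# pod5: same hostPort and protocol but hostIP 127.0.0.1. The scheduler
# treats 0.0.0.0 as conflicting with any specific hostIP on the same
# node, so pod5 stays Pending, which is what the test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-0a4b3a8a-804e-4bce-ba4b-f0f99b780b1f: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```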
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:304.247 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":137,"skipped":2298,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:19.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:52:19.452: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:21.579: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3792" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:21.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 8 13:52:24.215: INFO: Successfully updated pod "labelsupdate1dedc7ec-35f3-4131-adb4-6e6ad9fa7994" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:28.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8004" for this suite. 
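The Projected downwardAPI labels test above depends on the kubelet refreshing a projected downwardAPI volume when pod metadata changes. A minimal sketch of such a pod follows; the pod name, image, and mount path are illustrative, not taken from the test.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo   # illustrative name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # illustrative image
    # Periodically print the projected labels file; after a label edit
    # the kubelet rewrites the file and the new value shows up here.
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

Relabeling the pod (e.g. `kubectl label pod labelsupdate-demo key=value2 --overwrite`) is the kind of modification the test performs before polling the mounted file.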
• [SLOW TEST:6.668 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:28.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 8 13:52:28.301: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:44.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace
"crd-publish-openapi-6254" for this suite. • [SLOW TEST:16.405 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":140,"skipped":2372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:44.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 8 13:52:44.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 8 13:52:44.834: INFO: stderr: "" Mar 8 13:52:44.834: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is 
running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:44.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2666" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":141,"skipped":2397,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:44.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:51.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9584" for this suite. STEP: Destroying namespace "nsdeletetest-57" for this suite. Mar 8 13:52:51.047: INFO: Namespace nsdeletetest-57 was already deleted STEP: Destroying namespace "nsdeletetest-2206" for this suite. • [SLOW TEST:6.207 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":142,"skipped":2411,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:51.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 8 13:52:51.109: INFO: Waiting up to 5m0s for pod "client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5" in namespace "containers-5457" to be "success or failure" Mar 8 13:52:51.113: INFO: Pod "client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553252ms Mar 8 13:52:53.118: INFO: Pod "client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5": Phase="Running", Reason="", readiness=true. Elapsed: 2.008853242s Mar 8 13:52:55.121: INFO: Pod "client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012432052s STEP: Saw pod success Mar 8 13:52:55.121: INFO: Pod "client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5" satisfied condition "success or failure" Mar 8 13:52:55.124: INFO: Trying to get logs from node kind-worker2 pod client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5 container test-container: STEP: delete the pod Mar 8 13:52:55.151: INFO: Waiting for pod client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5 to disappear Mar 8 13:52:55.155: INFO: Pod client-containers-65a7d207-a412-42bf-8af6-9cf5700fe6a5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:55.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5457" for this suite. 
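The entrypoint-override test above comes down to one pod-spec rule: `command` replaces the image's ENTRYPOINT (and `args` would replace its CMD). A hand-written equivalent of the test pod, with illustrative names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative image
    # `command` overrides the image ENTRYPOINT entirely; the e2e test
    # then reads the container log to confirm the override took effect.
    command: ["/bin/echo", "entrypoint overridden"]
```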
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2428,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:55.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 8 13:52:58.266: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:52:59.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6786" for this suite. 
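Adoption and release in the ReplicaSet test above work purely through label selection: a bare pod whose labels match a later-created ReplicaSet's selector gets an ownerReference to it, and editing the pod's label out of the selector releases it. A sketch under those semantics; the shared `name: pod-adoption-release` label comes from the log, while the httpd image is the one this suite uses elsewhere:

```yaml
# Bare pod created first; it carries the 'name' label the ReplicaSet
# will select on, so the controller adopts it instead of creating a pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: httpd
    image: docker.io/library/httpd:2.4.38-alpine
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```

Changing the pod's label afterwards (e.g. `kubectl label pod pod-adoption-release name=released --overwrite`) takes it out of the selector; the controller drops the ownerReference and creates a replacement pod, which is the "released" half of the test.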
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":144,"skipped":2441,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:52:59.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:52:59.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd" in namespace "projected-2093" to be "success or failure" Mar 8 13:52:59.357: INFO: Pod "downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.574726ms Mar 8 13:53:01.361: INFO: Pod "downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009834528s STEP: Saw pod success Mar 8 13:53:01.361: INFO: Pod "downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd" satisfied condition "success or failure" Mar 8 13:53:01.363: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd container client-container: STEP: delete the pod Mar 8 13:53:01.398: INFO: Waiting for pod downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd to disappear Mar 8 13:53:01.404: INFO: Pod downwardapi-volume-f8275e57-f269-4914-b30f-a2674a89bbcd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:53:01.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2093" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:53:01.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating 
service test in namespace statefulset-6388 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6388 STEP: Creating statefulset with conflicting port in namespace statefulset-6388 STEP: Waiting until pod test-pod will start running in namespace statefulset-6388 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6388 Mar 8 13:53:03.519: INFO: Observed stateful pod in namespace: statefulset-6388, name: ss-0, uid: abe17857-e15a-480e-8929-a44a6470c7e3, status phase: Pending. Waiting for statefulset controller to delete. Mar 8 13:53:09.402: INFO: Observed stateful pod in namespace: statefulset-6388, name: ss-0, uid: abe17857-e15a-480e-8929-a44a6470c7e3, status phase: Failed. Waiting for statefulset controller to delete. Mar 8 13:53:09.409: INFO: Observed stateful pod in namespace: statefulset-6388, name: ss-0, uid: abe17857-e15a-480e-8929-a44a6470c7e3, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 8 13:53:09.419: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6388 STEP: Removing pod with conflicting port in namespace statefulset-6388 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6388 and running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 13:53:11.516: INFO: Deleting all statefulset in ns statefulset-6388 Mar 8 13:53:11.520: INFO: Scaling statefulset ss to 0 Mar 8 13:53:21.537: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 13:53:21.540: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:53:21.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6388" for this suite. • [SLOW TEST:20.156 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":146,"skipped":2471,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:53:21.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:53:21.631: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 8 13:53:21.644: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 8 13:53:26.648: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 13:53:26.648: INFO: Creating deployment "test-rolling-update-deployment" Mar 8 13:53:26.652: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 8 13:53:26.660: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 8 13:53:28.667: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 8 13:53:28.669: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 13:53:28.677: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2721 /apis/apps/v1/namespaces/deployment-2721/deployments/test-rolling-update-deployment 2f70cb5f-ec1b-4ba5-82c2-69b8d015d94f 20954 1 2020-03-08 13:53:26 +0000 UTC map[name:sample-pod] 
map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004155158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 13:53:26 +0000 UTC,LastTransitionTime:2020-03-08 13:53:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-08 13:53:28 +0000 UTC,LastTransitionTime:2020-03-08 13:53:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 13:53:28.680: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2721 /apis/apps/v1/namespaces/deployment-2721/replicasets/test-rolling-update-deployment-67cf4f6444 4b63a414-21a6-4d0a-9b7e-309d28b0a443 20942 1 2020-03-08 13:53:26 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2f70cb5f-ec1b-4ba5-82c2-69b8d015d94f 0xc004155797 0xc004155798}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004155878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 13:53:28.680: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 8 13:53:28.680: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2721 /apis/apps/v1/namespaces/deployment-2721/replicasets/test-rolling-update-controller 93349506-b9aa-4c01-a867-6ad94988717e 
20952 2 2020-03-08 13:53:21 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2f70cb5f-ec1b-4ba5-82c2-69b8d015d94f 0xc004155697 0xc004155698}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041556f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 13:53:28.684: INFO: Pod "test-rolling-update-deployment-67cf4f6444-4mhkm" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-4mhkm test-rolling-update-deployment-67cf4f6444- deployment-2721 /api/v1/namespaces/deployment-2721/pods/test-rolling-update-deployment-67cf4f6444-4mhkm bcf58081-48e1-41ee-a48e-a745f19a4060 20941 0 2020-03-08 13:53:26 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 4b63a414-21a6-4d0a-9b7e-309d28b0a443 0xc004155fd7 0xc004155fd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrt6d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrt6d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrt6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostna
me:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:53:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:53:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 13:53:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.130,StartTime:2020-03-08 13:53:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 13:53:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://df80744d86d2e06cce8d6645e5e34b3528dbae6027f6e0ce376f24e56602cc5b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:53:28.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2721" for this suite. • [SLOW TEST:7.124 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":147,"skipped":2492,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:53:28.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:53:28.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279" in namespace "downward-api-6833" to be "success or failure" Mar 8 13:53:28.743: INFO: Pod "downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016546ms Mar 8 13:53:30.747: INFO: Pod "downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007680513s STEP: Saw pod success Mar 8 13:53:30.747: INFO: Pod "downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279" satisfied condition "success or failure" Mar 8 13:53:30.749: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279 container client-container: STEP: delete the pod Mar 8 13:53:30.769: INFO: Waiting for pod downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279 to disappear Mar 8 13:53:30.789: INFO: Pod downwardapi-volume-072f20b9-4704-4fd6-86bf-c4833b12c279 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:53:30.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6833" for this suite. 
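The Downward API volume test that just finished mounts a volume whose files are populated from the container's own resource fields; the test then reads the file from the pod's logs and checks it matches the configured memory limit. A minimal sketch of the kind of pod it exercises (the name, image, and mount path are illustrative, not taken from the log):

```yaml
# Illustrative pod: exposes the container's own memory limit to the
# container via a downward API volume file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The pod runs to completion ("Succeeded") because its only container prints the file and exits, which is exactly the "success or failure" condition the log waits on.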
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2505,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:53:30.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:53:47.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4986" 
for this suite. • [SLOW TEST:16.259 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":149,"skipped":2519,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:53:47.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 13:53:50.127: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:53:50.157: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1924" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2520,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:53:50.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:54:15.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1251" for this suite. • [SLOW TEST:25.351 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:54:15.516: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 8 13:54:15.572: INFO: Waiting up to 5m0s for pod "client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8" in namespace "containers-577" to be "success or failure" Mar 8 13:54:15.604: INFO: Pod "client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.390449ms Mar 8 13:54:17.608: INFO: Pod "client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036324049s STEP: Saw pod success Mar 8 13:54:17.608: INFO: Pod "client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8" satisfied condition "success or failure" Mar 8 13:54:17.613: INFO: Trying to get logs from node kind-worker2 pod client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8 container test-container: STEP: delete the pod Mar 8 13:54:17.654: INFO: Waiting for pod client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8 to disappear Mar 8 13:54:17.661: INFO: Pod client-containers-4ed99791-8a1e-4e90-9519-b78910b332b8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:54:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-577" for this suite. 
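The Docker Containers test above ("override all") verifies that `command` and `args` in the pod spec take precedence over the image's built-in ENTRYPOINT and CMD. A minimal sketch of such a pod (names and image are illustrative):

```yaml
# Illustrative pod: both the entrypoint and its arguments are
# overridden, replacing whatever the image defines.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative image
    command: ["echo"]               # overrides the image ENTRYPOINT
    args: ["override", "all"]       # overrides the image CMD
```

If only `args` were set, the image's ENTRYPOINT would still run; setting both, as here, replaces the image's command line entirely.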
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2560,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:54:17.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-04135a67-0466-4b12-8a56-e66c99c7bdeb STEP: Creating a pod to test consume secrets Mar 8 13:54:17.728: INFO: Waiting up to 5m0s for pod "pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48" in namespace "secrets-2739" to be "success or failure" Mar 8 13:54:17.733: INFO: Pod "pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333736ms Mar 8 13:54:19.737: INFO: Pod "pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008293031s STEP: Saw pod success Mar 8 13:54:19.737: INFO: Pod "pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48" satisfied condition "success or failure" Mar 8 13:54:19.740: INFO: Trying to get logs from node kind-worker pod pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48 container secret-volume-test: STEP: delete the pod Mar 8 13:54:19.770: INFO: Waiting for pod pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48 to disappear Mar 8 13:54:19.774: INFO: Pod pod-secrets-2dc95d88-95c6-41cb-8817-c2be48b9aa48 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:54:19.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2739" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2575,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:54:19.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:54:19.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00" in namespace "downward-api-1556" to be "success or failure" Mar 8 13:54:19.868: INFO: Pod "downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00": Phase="Pending", Reason="", readiness=false. Elapsed: 31.187942ms Mar 8 13:54:21.872: INFO: Pod "downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035311928s STEP: Saw pod success Mar 8 13:54:21.872: INFO: Pod "downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00" satisfied condition "success or failure" Mar 8 13:54:21.875: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00 container client-container: STEP: delete the pod Mar 8 13:54:21.893: INFO: Waiting for pod downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00 to disappear Mar 8 13:54:21.897: INFO: Pod downwardapi-volume-01f7fe18-2828-40a8-a9c8-9ec663d5bd00 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:54:21.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1556" for this suite. 
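The test above covers the defaulting behavior of the Downward API: when a container sets no CPU limit, a `resourceFieldRef` on `limits.cpu` reports the node's allocatable CPU instead. A sketch of such a pod, assuming illustrative names and image:

```yaml
# Illustrative pod: no cpu limit is set, so the downward API file
# reports the node's allocatable cpu as the default limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative image; note: no resources.limits.cpu
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: "1m"           # report the value in millicores
```

The `divisor` controls the unit of the reported value; with `1m`, a node with 2 allocatable cores would surface as 2000.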
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:54:21.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-909 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-909 Mar 8 13:54:21.975: INFO: Found 0 stateful pods, waiting for 1 Mar 8 13:54:31.979: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 13:54:31.999: INFO: Deleting all statefulset in ns 
statefulset-909 Mar 8 13:54:32.005: INFO: Scaling statefulset ss to 0 Mar 8 13:54:42.053: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 13:54:42.056: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:54:42.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-909" for this suite. • [SLOW TEST:20.173 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":155,"skipped":2645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:54:42.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-63fb3dda-5646-4850-a2e1-1e9b194a61fe STEP: Creating a pod to test consume secrets Mar 8 13:54:42.142: INFO: Waiting up to 5m0s for pod "pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63" in namespace "secrets-7102" to be "success or failure" Mar 8 13:54:42.147: INFO: Pod "pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192926ms Mar 8 13:54:44.150: INFO: Pod "pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008025346s STEP: Saw pod success Mar 8 13:54:44.150: INFO: Pod "pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63" satisfied condition "success or failure" Mar 8 13:54:44.153: INFO: Trying to get logs from node kind-worker pod pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63 container secret-volume-test: STEP: delete the pod Mar 8 13:54:44.191: INFO: Waiting for pod pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63 to disappear Mar 8 13:54:44.201: INFO: Pod pod-secrets-516592c2-fb2d-4f27-ab2b-192836336c63 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:54:44.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7102" for this suite. 
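The Secrets test above mounts the same secret into one pod at two different volume mounts. A sketch of the shape of that pod (names, image, and paths are illustrative):

```yaml
# Illustrative pod: one secret consumed through two separate volumes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # illustrative image
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-example   # hypothetical secret name
  - name: secret-volume-2
    secret:
      secretName: secret-test-example   # same secret, mounted a second time
```

Both mounts expose the same keys, so the container sees identical files at both paths.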
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2747,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:54:44.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 13:54:52.318: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 13:54:52.323: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 13:54:54.323: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 13:54:54.327: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 13:54:56.323: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 13:54:56.328: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 13:54:58.323: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 13:54:58.328: INFO: Pod pod-with-prestop-http-hook still exists Mar 8 13:55:00.323: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 8 13:55:00.327: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:55:00.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1243" for this suite. 
• [SLOW TEST:16.137 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2755,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:55:00.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:55:00.920: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 13:55:02.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272500, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272500, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272500, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:55:05.965: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:55:05.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:55:07.132: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-6585" for this suite. STEP: Destroying namespace "webhook-6585-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.909 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":158,"skipped":2757,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:55:07.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:55:07.821: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Mar 8 13:55:09.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272507, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272507, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272507, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272507, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:55:12.858: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:55:13.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9805" for this suite. 
STEP: Destroying namespace "webhook-9805-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.068 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":159,"skipped":2760,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:55:13.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815 Mar 8 13:55:13.411: INFO: Pod name my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815: Found 0 pods out of 1 Mar 8 13:55:18.414: INFO: Pod name my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815: Found 1 pods out of 1 Mar 8 13:55:18.414: INFO: Ensuring all pods for 
ReplicationController "my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815" are running Mar 8 13:55:18.417: INFO: Pod "my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815-fkhs6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 13:55:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 13:55:15 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 13:55:15 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 13:55:13 +0000 UTC Reason: Message:}]) Mar 8 13:55:18.417: INFO: Trying to dial the pod Mar 8 13:55:23.431: INFO: Controller my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815: Got expected result from replica 1 [my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815-fkhs6]: "my-hostname-basic-2017cd8e-395d-4d39-9c88-3b2b3ea7a815-fkhs6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:55:23.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8496" for this suite. 
• [SLOW TEST:10.116 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":160,"skipped":2764,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:55:23.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-8247c484-479f-418f-bdfe-09b226ca4449 in namespace container-probe-5619 Mar 8 13:55:25.514: INFO: Started pod liveness-8247c484-479f-418f-bdfe-09b226ca4449 in namespace container-probe-5619 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 13:55:25.516: INFO: Initial restart count of pod liveness-8247c484-479f-418f-bdfe-09b226ca4449 is 0 Mar 8 13:55:41.549: INFO: Restart count of pod 
container-probe-5619/liveness-8247c484-479f-418f-bdfe-09b226ca4449 is now 1 (16.032821617s elapsed) Mar 8 13:56:01.586: INFO: Restart count of pod container-probe-5619/liveness-8247c484-479f-418f-bdfe-09b226ca4449 is now 2 (36.069854331s elapsed) Mar 8 13:56:21.634: INFO: Restart count of pod container-probe-5619/liveness-8247c484-479f-418f-bdfe-09b226ca4449 is now 3 (56.117943748s elapsed) Mar 8 13:56:41.674: INFO: Restart count of pod container-probe-5619/liveness-8247c484-479f-418f-bdfe-09b226ca4449 is now 4 (1m16.157593951s elapsed) Mar 8 13:57:43.799: INFO: Restart count of pod container-probe-5619/liveness-8247c484-479f-418f-bdfe-09b226ca4449 is now 5 (2m18.282936458s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:57:43.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5619" for this suite. • [SLOW TEST:140.401 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2767,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:57:43.841: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 13:57:43.913: INFO: Waiting up to 5m0s for pod "pod-6ebb5535-95aa-46e1-933d-b6436c001f50" in namespace "emptydir-2111" to be "success or failure" Mar 8 13:57:43.918: INFO: Pod "pod-6ebb5535-95aa-46e1-933d-b6436c001f50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273517ms Mar 8 13:57:45.922: INFO: Pod "pod-6ebb5535-95aa-46e1-933d-b6436c001f50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008771764s Mar 8 13:57:47.926: INFO: Pod "pod-6ebb5535-95aa-46e1-933d-b6436c001f50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012863224s STEP: Saw pod success Mar 8 13:57:47.926: INFO: Pod "pod-6ebb5535-95aa-46e1-933d-b6436c001f50" satisfied condition "success or failure" Mar 8 13:57:47.930: INFO: Trying to get logs from node kind-worker2 pod pod-6ebb5535-95aa-46e1-933d-b6436c001f50 container test-container: STEP: delete the pod Mar 8 13:57:47.967: INFO: Waiting for pod pod-6ebb5535-95aa-46e1-933d-b6436c001f50 to disappear Mar 8 13:57:47.971: INFO: Pod pod-6ebb5535-95aa-46e1-933d-b6436c001f50 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:57:47.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2111" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2767,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:57:47.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 13:57:48.055: INFO: Waiting up to 5m0s for pod "pod-f3c32b52-012e-46ef-adc6-84d7c241716f" in namespace "emptydir-7074" to be "success or failure" Mar 8 13:57:48.061: INFO: Pod "pod-f3c32b52-012e-46ef-adc6-84d7c241716f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074758ms Mar 8 13:57:50.065: INFO: Pod "pod-f3c32b52-012e-46ef-adc6-84d7c241716f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009905447s STEP: Saw pod success Mar 8 13:57:50.065: INFO: Pod "pod-f3c32b52-012e-46ef-adc6-84d7c241716f" satisfied condition "success or failure" Mar 8 13:57:50.068: INFO: Trying to get logs from node kind-worker pod pod-f3c32b52-012e-46ef-adc6-84d7c241716f container test-container: STEP: delete the pod Mar 8 13:57:50.105: INFO: Waiting for pod pod-f3c32b52-012e-46ef-adc6-84d7c241716f to disappear Mar 8 13:57:50.109: INFO: Pod pod-f3c32b52-012e-46ef-adc6-84d7c241716f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:57:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7074" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:57:50.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:06.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5298" for this suite. • [SLOW TEST:16.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":164,"skipped":2799,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:06.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 8 13:58:08.828: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7395 pod-service-account-3cede44f-b66b-4ac2-aa04-5e731a7fb6b2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 8 13:58:10.655: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7395 pod-service-account-3cede44f-b66b-4ac2-aa04-5e731a7fb6b2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 8 13:58:10.857: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7395 pod-service-account-3cede44f-b66b-4ac2-aa04-5e731a7fb6b2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:11.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7395" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":165,"skipped":2810,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:11.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 13:58:11.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d" in namespace "downward-api-2222" to be "success or failure" Mar 8 13:58:11.121: INFO: Pod "downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.897683ms Mar 8 13:58:13.125: INFO: Pod "downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008755052s STEP: Saw pod success Mar 8 13:58:13.125: INFO: Pod "downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d" satisfied condition "success or failure" Mar 8 13:58:13.127: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d container client-container: STEP: delete the pod Mar 8 13:58:13.161: INFO: Waiting for pod downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d to disappear Mar 8 13:58:13.176: INFO: Pod downwardapi-volume-2764636e-1dd4-4e9e-afa5-bd95315d7e3d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:13.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2222" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2829,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:13.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 13:58:13.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2152' Mar 8 13:58:13.343: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 13:58:13.343: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller Mar 8 13:58:13.380: INFO: scanned /root for discovery docs: Mar 8 13:58:13.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2152' Mar 8 13:58:29.235: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 13:58:29.235: INFO: stdout: "Created e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f\nScaling up e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 8 13:58:29.235: INFO: stdout: "Created e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f\nScaling up e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 8 13:58:29.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2152' Mar 8 13:58:29.361: INFO: stderr: "" Mar 8 13:58:29.361: INFO: stdout: "e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f-k2hdw " Mar 8 13:58:29.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f-k2hdw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2152' Mar 8 13:58:29.465: INFO: stderr: "" Mar 8 13:58:29.465: INFO: stdout: "true" Mar 8 13:58:29.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f-k2hdw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2152' Mar 8 13:58:29.570: INFO: stderr: "" Mar 8 13:58:29.570: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 8 13:58:29.570: INFO: e2e-test-httpd-rc-cc8914e0683861c65f64e452586a0c7f-k2hdw is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Mar 8 13:58:29.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2152' Mar 8 13:58:29.669: INFO: stderr: "" Mar 8 13:58:29.669: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:29.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2152" for this suite. 
• [SLOW TEST:16.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":167,"skipped":2833,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:29.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 8 13:58:29.764: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7664 /api/v1/namespaces/watch-7664/configmaps/e2e-watch-test-label-changed cb5e618c-f0ca-418e-a259-cfe77938e948 22670 0 2020-03-08 13:58:29 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 13:58:29.764: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7664 /api/v1/namespaces/watch-7664/configmaps/e2e-watch-test-label-changed cb5e618c-f0ca-418e-a259-cfe77938e948 22671 0 2020-03-08 13:58:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 8 13:58:29.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7664 /api/v1/namespaces/watch-7664/configmaps/e2e-watch-test-label-changed cb5e618c-f0ca-418e-a259-cfe77938e948 22672 0 2020-03-08 13:58:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 8 13:58:39.817: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7664 /api/v1/namespaces/watch-7664/configmaps/e2e-watch-test-label-changed cb5e618c-f0ca-418e-a259-cfe77938e948 22723 0 2020-03-08 13:58:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 13:58:39.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7664 /api/v1/namespaces/watch-7664/configmaps/e2e-watch-test-label-changed cb5e618c-f0ca-418e-a259-cfe77938e948 22724 0 2020-03-08 13:58:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 8 13:58:39.817: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7664 /api/v1/namespaces/watch-7664/configmaps/e2e-watch-test-label-changed cb5e618c-f0ca-418e-a259-cfe77938e948 22725 0 2020-03-08 13:58:29 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:39.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7664" for this suite. • [SLOW TEST:10.132 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":168,"skipped":2851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:39.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be 
consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-84387dd6-f08f-410f-8def-b3694e1e4662 STEP: Creating a pod to test consume configMaps Mar 8 13:58:39.879: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06" in namespace "projected-5148" to be "success or failure" Mar 8 13:58:39.883: INFO: Pod "pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06": Phase="Pending", Reason="", readiness=false. Elapsed: 3.701521ms Mar 8 13:58:41.887: INFO: Pod "pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007699072s STEP: Saw pod success Mar 8 13:58:41.887: INFO: Pod "pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06" satisfied condition "success or failure" Mar 8 13:58:41.889: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06 container projected-configmap-volume-test: STEP: delete the pod Mar 8 13:58:41.915: INFO: Waiting for pod pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06 to disappear Mar 8 13:58:41.931: INFO: Pod pod-projected-configmaps-335a0c59-c748-4efc-82db-b1d8c33e0b06 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:41.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5148" for this suite. 
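[Editor's note] The "success or failure" wait seen in these pod tests is a phase poll with a deadline. A minimal sketch, assuming a hypothetical `get_phase` callable in place of a real read of `pod.status.phase`; `clock` and `sleep` are injectable so the loop can be exercised without real waiting:

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, poll=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll until the pod phase is terminal (Succeeded or Failed) or the
    deadline passes. Sketch of the e2e framework's wait, not its code."""
    deadline = clock() + timeout
    while clock() <= deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)                    # back off between API reads
    raise TimeoutError(f"pod still {phase!r} after {timeout:.0f}s")

# Example with canned phases and no real sleeping:
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_completion(lambda: next(phases), sleep=lambda s: None)
```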
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:41.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-6bd45306-8c20-4fe2-bc11-0f2201addd6b STEP: Creating secret with name secret-projected-all-test-volume-88f3cd08-e528-4789-9b38-c5104cb6519a STEP: Creating a pod to test Check all projections for projected volume plugin Mar 8 13:58:42.000: INFO: Waiting up to 5m0s for pod "projected-volume-b65d40ea-1963-45f0-a799-65511c791432" in namespace "projected-6115" to be "success or failure" Mar 8 13:58:42.004: INFO: Pod "projected-volume-b65d40ea-1963-45f0-a799-65511c791432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099582ms Mar 8 13:58:44.008: INFO: Pod "projected-volume-b65d40ea-1963-45f0-a799-65511c791432": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007920856s STEP: Saw pod success Mar 8 13:58:44.008: INFO: Pod "projected-volume-b65d40ea-1963-45f0-a799-65511c791432" satisfied condition "success or failure" Mar 8 13:58:44.011: INFO: Trying to get logs from node kind-worker2 pod projected-volume-b65d40ea-1963-45f0-a799-65511c791432 container projected-all-volume-test: STEP: delete the pod Mar 8 13:58:44.044: INFO: Waiting for pod projected-volume-b65d40ea-1963-45f0-a799-65511c791432 to disappear Mar 8 13:58:44.052: INFO: Pod projected-volume-b65d40ea-1963-45f0-a799-65511c791432 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:44.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6115" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2909,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:44.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 13:58:44.118: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 
8 13:58:44.129: INFO: Waiting for terminating namespaces to be deleted... Mar 8 13:58:44.132: INFO: Logging pods the kubelet thinks is on node kind-worker before test Mar 8 13:58:44.137: INFO: kindnet-p9whg from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:58:44.138: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 13:58:44.138: INFO: kube-proxy-pz8tf from kube-system started at 2020-03-08 12:58:54 +0000 UTC (1 container statuses recorded) Mar 8 13:58:44.138: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:58:44.138: INFO: Logging pods the kubelet thinks is on node kind-worker2 before test Mar 8 13:58:44.144: INFO: kube-proxy-vfcnx from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:58:44.144: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 13:58:44.144: INFO: kindnet-mjfxb from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 13:58:44.144: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
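[Editor's note] The hostPort predicate this test exercises treats two pods as conflicting only when port, protocol, and (overlapping) hostIP all match. A sketch of that rule, not the scheduler's actual code:

```python
def host_ports_conflict(a, b):
    """Decide whether two hostPort requests collide on one node.

    Each request is a (hostIP, hostPort, protocol) tuple. A conflict
    requires the same port AND the same protocol AND overlapping IPs,
    where 0.0.0.0 overlaps every address. Illustrative sketch only.
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)
```

Under this rule pod2 (same port 54321, different hostIP) and pod3 (same port and hostIP as pod2, but UDP) both schedule alongside pod1, as the steps below confirm.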
STEP: verifying the node has the label kubernetes.io/e2e-f3952141-da38-4637-9c3f-50db94c24e90 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-f3952141-da38-4637-9c3f-50db94c24e90 off the node kind-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-f3952141-da38-4637-9c3f-50db94c24e90 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:52.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3744" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.247 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":171,"skipped":2931,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:52.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 13:58:52.349: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:58:58.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8697" for this suite. 
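[Editor's note] These CustomResourceDefinition tests lean on the /apis discovery documents (the discovery test earlier in the run walks them explicitly). Checking whether a group/version is advertised is a simple traversal of the APIGroupList JSON shape; a sketch over a parsed document:

```python
def has_group_version(discovery, group, version):
    """Return True if group/version appears in an /apis discovery document
    (a dict parsed from the API's JSON). Shape assumed per APIGroupList:
    groups[].name and groups[].versions[].groupVersion."""
    target = f"{group}/{version}"
    for g in discovery.get("groups", []):
        if g.get("name") == group:
            return any(v.get("groupVersion") == target
                       for v in g.get("versions", []))
    return False
```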
• [SLOW TEST:5.880 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":172,"skipped":2935,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:58:58.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 13:58:58.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 8 13:59:00.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272738, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272738, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272738, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719272738, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 13:59:03.914: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 13:59:16.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-835" for this suite. STEP: Destroying namespace "webhook-835-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.982 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":173,"skipped":2946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 13:59:16.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 
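[Editor's note] The canary and phased updates in this StatefulSet test hinge on the RollingUpdate `partition` field: only pods whose ordinal is >= partition receive the new revision, and updates proceed from the highest ordinal down. A sketch of that selection rule (not the controller's code):

```python
def ordinals_to_update(replicas, partition):
    """Ordinals a RollingUpdate StatefulSet touches for a given partition,
    in the descending order updates are applied; pods below the partition
    stay on the old revision. Illustrative sketch only."""
    return [i for i in range(replicas - 1, -1, -1) if i >= partition]
```

This reproduces the three phases logged below: partition 3 with 3 replicas updates nothing ("partition is greater than the number of replicas"), partition 2 updates only ss2-2 (the canary), and partition 0 rolls all three pods.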
STEP: Creating service test in namespace statefulset-2383 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 8 13:59:16.227: INFO: Found 0 stateful pods, waiting for 3 Mar 8 13:59:26.232: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 13:59:26.232: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 13:59:26.232: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 8 13:59:26.261: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 8 13:59:36.298: INFO: Updating stateful set ss2 Mar 8 13:59:36.311: INFO: Waiting for Pod statefulset-2383/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 8 13:59:46.431: INFO: Found 2 stateful pods, waiting for 3 Mar 8 13:59:56.436: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 13:59:56.437: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 13:59:56.437: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 8 13:59:56.464: INFO: Updating stateful set ss2 Mar 8 13:59:56.485: INFO: Waiting for Pod statefulset-2383/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 8 14:00:06.493: INFO: Waiting for Pod statefulset-2383/ss2-1 to have revision ss2-84f9d6bf57 update revision 
ss2-65c7964b94 Mar 8 14:00:16.510: INFO: Updating stateful set ss2 Mar 8 14:00:16.544: INFO: Waiting for StatefulSet statefulset-2383/ss2 to complete update Mar 8 14:00:16.544: INFO: Waiting for Pod statefulset-2383/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 14:00:26.552: INFO: Deleting all statefulset in ns statefulset-2383 Mar 8 14:00:26.555: INFO: Scaling statefulset ss2 to 0 Mar 8 14:00:46.579: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 14:00:46.585: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:00:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2383" for this suite. • [SLOW TEST:90.446 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":174,"skipped":2972,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:00:46.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0308 14:01:17.219803 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 14:01:17.219: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:01:17.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9224" for this suite. 
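[Editor's note] The behavior asserted above is deleteOptions.propagationPolicy semantics: with `Orphan`, the owner is deleted but its dependents survive with their ownerReferences stripped, so the garbage collector must not touch the ReplicaSet. A toy in-memory model of that rule (names and store shape are placeholders):

```python
def delete_object(objects, owners, name, policy):
    """Delete `name` from a toy object store, honoring propagationPolicy.

    objects: set of live object names; owners: dependent -> owner map.
    'Orphan' clears dependents' owner references and keeps them;
    'Background' (and, for this sketch, 'Foreground') deletes them too.
    """
    objects.discard(name)
    for child in [c for c, o in owners.items() if o == name]:
        del owners[child]                    # strip the ownerReference
        if policy != "Orphan":
            delete_object(objects, owners, child, policy)
    return objects
```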
• [SLOW TEST:30.609 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":175,"skipped":2979,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:01:17.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 8 14:01:19.322: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3591 PodName:pod-sharedvolume-5aa1d9b3-2628-4431-973f-910534be76b4 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 14:01:19.322: INFO: >>> kubeConfig: /root/.kube/config I0308 14:01:19.357334 6 log.go:172] (0xc005670370) (0xc0022fb7c0) Create stream I0308 14:01:19.357362 6 
log.go:172] (0xc005670370) (0xc0022fb7c0) Stream added, broadcasting: 1 I0308 14:01:19.361707 6 log.go:172] (0xc005670370) Reply frame received for 1 I0308 14:01:19.361757 6 log.go:172] (0xc005670370) (0xc0022fb860) Create stream I0308 14:01:19.361778 6 log.go:172] (0xc005670370) (0xc0022fb860) Stream added, broadcasting: 3 I0308 14:01:19.363079 6 log.go:172] (0xc005670370) Reply frame received for 3 I0308 14:01:19.363124 6 log.go:172] (0xc005670370) (0xc0000fea00) Create stream I0308 14:01:19.363141 6 log.go:172] (0xc005670370) (0xc0000fea00) Stream added, broadcasting: 5 I0308 14:01:19.364238 6 log.go:172] (0xc005670370) Reply frame received for 5 I0308 14:01:19.426007 6 log.go:172] (0xc005670370) Data frame received for 3 I0308 14:01:19.426034 6 log.go:172] (0xc0022fb860) (3) Data frame handling I0308 14:01:19.426070 6 log.go:172] (0xc005670370) Data frame received for 5 I0308 14:01:19.426143 6 log.go:172] (0xc0000fea00) (5) Data frame handling I0308 14:01:19.426203 6 log.go:172] (0xc0022fb860) (3) Data frame sent I0308 14:01:19.426225 6 log.go:172] (0xc005670370) Data frame received for 3 I0308 14:01:19.426263 6 log.go:172] (0xc0022fb860) (3) Data frame handling I0308 14:01:19.427613 6 log.go:172] (0xc005670370) Data frame received for 1 I0308 14:01:19.427635 6 log.go:172] (0xc0022fb7c0) (1) Data frame handling I0308 14:01:19.427648 6 log.go:172] (0xc0022fb7c0) (1) Data frame sent I0308 14:01:19.427664 6 log.go:172] (0xc005670370) (0xc0022fb7c0) Stream removed, broadcasting: 1 I0308 14:01:19.427884 6 log.go:172] (0xc005670370) (0xc0022fb7c0) Stream removed, broadcasting: 1 I0308 14:01:19.427931 6 log.go:172] (0xc005670370) (0xc0022fb860) Stream removed, broadcasting: 3 I0308 14:01:19.427956 6 log.go:172] (0xc005670370) (0xc0000fea00) Stream removed, broadcasting: 5 Mar 8 14:01:19.427: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 
14:01:19.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0308 14:01:19.428071 6 log.go:172] (0xc005670370) Go away received STEP: Destroying namespace "emptydir-3591" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":176,"skipped":3005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:01:19.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:01:30.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4252" for this suite. 
• [SLOW TEST:11.132 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":177,"skipped":3073,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:01:30.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 8 14:01:30.678: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5228 /api/v1/namespaces/watch-5228/configmaps/e2e-watch-test-resource-version e97323ef-5e91-4c56-aa23-82a9c14fdb52 23871 0 2020-03-08 14:01:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 14:01:30.678: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5228 /api/v1/namespaces/watch-5228/configmaps/e2e-watch-test-resource-version e97323ef-5e91-4c56-aa23-82a9c14fdb52 23872 0 2020-03-08 14:01:30 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:01:30.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5228" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":178,"skipped":3076,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:01:30.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-70ea4f54-c8ef-4125-be0f-b843ba2d4547 in namespace container-probe-9262 Mar 8 14:01:32.741: INFO: 
Started pod liveness-70ea4f54-c8ef-4125-be0f-b843ba2d4547 in namespace container-probe-9262 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 14:01:32.743: INFO: Initial restart count of pod liveness-70ea4f54-c8ef-4125-be0f-b843ba2d4547 is 0 Mar 8 14:01:52.784: INFO: Restart count of pod container-probe-9262/liveness-70ea4f54-c8ef-4125-be0f-b843ba2d4547 is now 1 (20.040641817s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:01:52.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9262" for this suite. • [SLOW TEST:22.134 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":3079,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:01:52.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as 
non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-02c6a75e-82bd-4e6d-a54a-7463763a26cd STEP: Creating a pod to test consume configMaps Mar 8 14:01:52.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28" in namespace "configmap-5400" to be "success or failure" Mar 8 14:01:52.882: INFO: Pod "pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292284ms Mar 8 14:01:54.885: INFO: Pod "pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007557768s Mar 8 14:01:56.889: INFO: Pod "pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010854882s STEP: Saw pod success Mar 8 14:01:56.889: INFO: Pod "pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28" satisfied condition "success or failure" Mar 8 14:01:56.891: INFO: Trying to get logs from node kind-worker pod pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28 container configmap-volume-test: STEP: delete the pod Mar 8 14:01:56.942: INFO: Waiting for pod pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28 to disappear Mar 8 14:01:56.945: INFO: Pod pod-configmaps-c8ac1863-d08d-4d9f-aea3-814af8b6ae28 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:01:56.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5400" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3098,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:01:56.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 14:01:57.044: INFO: Waiting up to 5m0s for pod "pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71" in namespace "emptydir-6673" to be "success or failure" Mar 8 14:01:57.056: INFO: Pod "pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71": Phase="Pending", Reason="", readiness=false. Elapsed: 11.539444ms Mar 8 14:01:59.060: INFO: Pod "pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015165302s Mar 8 14:02:01.063: INFO: Pod "pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018497587s STEP: Saw pod success Mar 8 14:02:01.063: INFO: Pod "pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71" satisfied condition "success or failure" Mar 8 14:02:01.065: INFO: Trying to get logs from node kind-worker2 pod pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71 container test-container: STEP: delete the pod Mar 8 14:02:01.093: INFO: Waiting for pod pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71 to disappear Mar 8 14:02:01.098: INFO: Pod pod-f9496f32-c5ce-4a66-ae98-a5d5f124bb71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:02:01.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6673" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3118,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:02:01.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 14:02:01.161: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171" in namespace "projected-1839" to be "success or failure" Mar 8 14:02:01.180: INFO: Pod "downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171": Phase="Pending", Reason="", readiness=false. Elapsed: 18.937494ms Mar 8 14:02:03.184: INFO: Pod "downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023218126s STEP: Saw pod success Mar 8 14:02:03.184: INFO: Pod "downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171" satisfied condition "success or failure" Mar 8 14:02:03.186: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171 container client-container: STEP: delete the pod Mar 8 14:02:03.201: INFO: Waiting for pod downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171 to disappear Mar 8 14:02:03.205: INFO: Pod downwardapi-volume-07a57450-6b97-483d-a77c-8c7f078ae171 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:02:03.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1839" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3119,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:02:03.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-85196221-1b39-4e43-9c97-0d563e4885de [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:02:03.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3485" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":183,"skipped":3126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:02:03.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-556 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-556 I0308 14:02:03.368183 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-556, replica count: 2 I0308 14:02:06.418559 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 14:02:06.418: INFO: Creating new exec pod Mar 8 14:02:09.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-556 execpodcz4xz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 8 14:02:09.701: INFO: stderr: "I0308 14:02:09.624374 2422 log.go:172] (0xc0009e8580) (0xc000683c20) Create 
stream\nI0308 14:02:09.624429 2422 log.go:172] (0xc0009e8580) (0xc000683c20) Stream added, broadcasting: 1\nI0308 14:02:09.626928 2422 log.go:172] (0xc0009e8580) Reply frame received for 1\nI0308 14:02:09.626959 2422 log.go:172] (0xc0009e8580) (0xc000620640) Create stream\nI0308 14:02:09.626969 2422 log.go:172] (0xc0009e8580) (0xc000620640) Stream added, broadcasting: 3\nI0308 14:02:09.627905 2422 log.go:172] (0xc0009e8580) Reply frame received for 3\nI0308 14:02:09.627928 2422 log.go:172] (0xc0009e8580) (0xc000a32000) Create stream\nI0308 14:02:09.627935 2422 log.go:172] (0xc0009e8580) (0xc000a32000) Stream added, broadcasting: 5\nI0308 14:02:09.628819 2422 log.go:172] (0xc0009e8580) Reply frame received for 5\nI0308 14:02:09.693912 2422 log.go:172] (0xc0009e8580) Data frame received for 5\nI0308 14:02:09.693940 2422 log.go:172] (0xc000a32000) (5) Data frame handling\nI0308 14:02:09.693962 2422 log.go:172] (0xc000a32000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0308 14:02:09.696138 2422 log.go:172] (0xc0009e8580) Data frame received for 5\nI0308 14:02:09.696250 2422 log.go:172] (0xc000a32000) (5) Data frame handling\nI0308 14:02:09.696317 2422 log.go:172] (0xc000a32000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0308 14:02:09.696570 2422 log.go:172] (0xc0009e8580) Data frame received for 5\nI0308 14:02:09.696585 2422 log.go:172] (0xc000a32000) (5) Data frame handling\nI0308 14:02:09.696708 2422 log.go:172] (0xc0009e8580) Data frame received for 3\nI0308 14:02:09.696739 2422 log.go:172] (0xc000620640) (3) Data frame handling\nI0308 14:02:09.698510 2422 log.go:172] (0xc0009e8580) Data frame received for 1\nI0308 14:02:09.698535 2422 log.go:172] (0xc000683c20) (1) Data frame handling\nI0308 14:02:09.698547 2422 log.go:172] (0xc000683c20) (1) Data frame sent\nI0308 14:02:09.698562 2422 log.go:172] (0xc0009e8580) (0xc000683c20) Stream removed, broadcasting: 1\nI0308 14:02:09.698755 2422 log.go:172] 
(0xc0009e8580) Go away received\nI0308 14:02:09.698888 2422 log.go:172] (0xc0009e8580) (0xc000683c20) Stream removed, broadcasting: 1\nI0308 14:02:09.698906 2422 log.go:172] (0xc0009e8580) (0xc000620640) Stream removed, broadcasting: 3\nI0308 14:02:09.698915 2422 log.go:172] (0xc0009e8580) (0xc000a32000) Stream removed, broadcasting: 5\n" Mar 8 14:02:09.701: INFO: stdout: "" Mar 8 14:02:09.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-556 execpodcz4xz -- /bin/sh -x -c nc -zv -t -w 2 10.96.33.1 80' Mar 8 14:02:09.914: INFO: stderr: "I0308 14:02:09.854883 2442 log.go:172] (0xc000a52580) (0xc0009ba000) Create stream\nI0308 14:02:09.854929 2442 log.go:172] (0xc000a52580) (0xc0009ba000) Stream added, broadcasting: 1\nI0308 14:02:09.857291 2442 log.go:172] (0xc000a52580) Reply frame received for 1\nI0308 14:02:09.857323 2442 log.go:172] (0xc000a52580) (0xc000717a40) Create stream\nI0308 14:02:09.857337 2442 log.go:172] (0xc000a52580) (0xc000717a40) Stream added, broadcasting: 3\nI0308 14:02:09.858215 2442 log.go:172] (0xc000a52580) Reply frame received for 3\nI0308 14:02:09.858291 2442 log.go:172] (0xc000a52580) (0xc0009ba0a0) Create stream\nI0308 14:02:09.858317 2442 log.go:172] (0xc000a52580) (0xc0009ba0a0) Stream added, broadcasting: 5\nI0308 14:02:09.859417 2442 log.go:172] (0xc000a52580) Reply frame received for 5\nI0308 14:02:09.909861 2442 log.go:172] (0xc000a52580) Data frame received for 3\nI0308 14:02:09.909904 2442 log.go:172] (0xc000717a40) (3) Data frame handling\nI0308 14:02:09.909932 2442 log.go:172] (0xc000a52580) Data frame received for 5\nI0308 14:02:09.909947 2442 log.go:172] (0xc0009ba0a0) (5) Data frame handling\nI0308 14:02:09.909962 2442 log.go:172] (0xc0009ba0a0) (5) Data frame sent\nI0308 14:02:09.909976 2442 log.go:172] (0xc000a52580) Data frame received for 5\nI0308 14:02:09.910002 2442 log.go:172] (0xc0009ba0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.33.1 80\nConnection to 
10.96.33.1 80 port [tcp/http] succeeded!\nI0308 14:02:09.911580 2442 log.go:172] (0xc000a52580) Data frame received for 1\nI0308 14:02:09.911607 2442 log.go:172] (0xc0009ba000) (1) Data frame handling\nI0308 14:02:09.911657 2442 log.go:172] (0xc0009ba000) (1) Data frame sent\nI0308 14:02:09.911681 2442 log.go:172] (0xc000a52580) (0xc0009ba000) Stream removed, broadcasting: 1\nI0308 14:02:09.911705 2442 log.go:172] (0xc000a52580) Go away received\nI0308 14:02:09.911982 2442 log.go:172] (0xc000a52580) (0xc0009ba000) Stream removed, broadcasting: 1\nI0308 14:02:09.912001 2442 log.go:172] (0xc000a52580) (0xc000717a40) Stream removed, broadcasting: 3\nI0308 14:02:09.912010 2442 log.go:172] (0xc000a52580) (0xc0009ba0a0) Stream removed, broadcasting: 5\n" Mar 8 14:02:09.914: INFO: stdout: "" Mar 8 14:02:09.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-556 execpodcz4xz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 30713' Mar 8 14:02:10.114: INFO: stderr: "I0308 14:02:10.048884 2464 log.go:172] (0xc000a9a0b0) (0xc0009ce000) Create stream\nI0308 14:02:10.048930 2464 log.go:172] (0xc000a9a0b0) (0xc0009ce000) Stream added, broadcasting: 1\nI0308 14:02:10.052451 2464 log.go:172] (0xc000a9a0b0) Reply frame received for 1\nI0308 14:02:10.052540 2464 log.go:172] (0xc000a9a0b0) (0xc0007119a0) Create stream\nI0308 14:02:10.052578 2464 log.go:172] (0xc000a9a0b0) (0xc0007119a0) Stream added, broadcasting: 3\nI0308 14:02:10.056419 2464 log.go:172] (0xc000a9a0b0) Reply frame received for 3\nI0308 14:02:10.056457 2464 log.go:172] (0xc000a9a0b0) (0xc000711b80) Create stream\nI0308 14:02:10.056466 2464 log.go:172] (0xc000a9a0b0) (0xc000711b80) Stream added, broadcasting: 5\nI0308 14:02:10.057218 2464 log.go:172] (0xc000a9a0b0) Reply frame received for 5\nI0308 14:02:10.108270 2464 log.go:172] (0xc000a9a0b0) Data frame received for 5\nI0308 14:02:10.108348 2464 log.go:172] (0xc000711b80) (5) Data frame handling\nI0308 14:02:10.108368 2464 
log.go:172] (0xc000711b80) (5) Data frame sent\nI0308 14:02:10.108382 2464 log.go:172] (0xc000a9a0b0) Data frame received for 5\nI0308 14:02:10.108393 2464 log.go:172] (0xc000711b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.4 30713\nConnection to 172.17.0.4 30713 port [tcp/30713] succeeded!\nI0308 14:02:10.108418 2464 log.go:172] (0xc000711b80) (5) Data frame sent\nI0308 14:02:10.108585 2464 log.go:172] (0xc000a9a0b0) Data frame received for 5\nI0308 14:02:10.108611 2464 log.go:172] (0xc000a9a0b0) Data frame received for 3\nI0308 14:02:10.108642 2464 log.go:172] (0xc0007119a0) (3) Data frame handling\nI0308 14:02:10.108670 2464 log.go:172] (0xc000711b80) (5) Data frame handling\nI0308 14:02:10.110740 2464 log.go:172] (0xc000a9a0b0) Data frame received for 1\nI0308 14:02:10.110756 2464 log.go:172] (0xc0009ce000) (1) Data frame handling\nI0308 14:02:10.110765 2464 log.go:172] (0xc0009ce000) (1) Data frame sent\nI0308 14:02:10.110774 2464 log.go:172] (0xc000a9a0b0) (0xc0009ce000) Stream removed, broadcasting: 1\nI0308 14:02:10.110784 2464 log.go:172] (0xc000a9a0b0) Go away received\nI0308 14:02:10.111167 2464 log.go:172] (0xc000a9a0b0) (0xc0009ce000) Stream removed, broadcasting: 1\nI0308 14:02:10.111191 2464 log.go:172] (0xc000a9a0b0) (0xc0007119a0) Stream removed, broadcasting: 3\nI0308 14:02:10.111203 2464 log.go:172] (0xc000a9a0b0) (0xc000711b80) Stream removed, broadcasting: 5\n" Mar 8 14:02:10.114: INFO: stdout: "" Mar 8 14:02:10.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-556 execpodcz4xz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 30713' Mar 8 14:02:10.300: INFO: stderr: "I0308 14:02:10.239099 2485 log.go:172] (0xc000528d10) (0xc0006e5a40) Create stream\nI0308 14:02:10.239138 2485 log.go:172] (0xc000528d10) (0xc0006e5a40) Stream added, broadcasting: 1\nI0308 14:02:10.240991 2485 log.go:172] (0xc000528d10) Reply frame received for 1\nI0308 14:02:10.241014 2485 log.go:172] (0xc000528d10) 
(0xc000988000) Create stream\nI0308 14:02:10.241023 2485 log.go:172] (0xc000528d10) (0xc000988000) Stream added, broadcasting: 3\nI0308 14:02:10.241960 2485 log.go:172] (0xc000528d10) Reply frame received for 3\nI0308 14:02:10.241979 2485 log.go:172] (0xc000528d10) (0xc0006e5c20) Create stream\nI0308 14:02:10.241987 2485 log.go:172] (0xc000528d10) (0xc0006e5c20) Stream added, broadcasting: 5\nI0308 14:02:10.242866 2485 log.go:172] (0xc000528d10) Reply frame received for 5\nI0308 14:02:10.296290 2485 log.go:172] (0xc000528d10) Data frame received for 3\nI0308 14:02:10.296322 2485 log.go:172] (0xc000988000) (3) Data frame handling\nI0308 14:02:10.296342 2485 log.go:172] (0xc000528d10) Data frame received for 5\nI0308 14:02:10.296351 2485 log.go:172] (0xc0006e5c20) (5) Data frame handling\nI0308 14:02:10.296360 2485 log.go:172] (0xc0006e5c20) (5) Data frame sent\nI0308 14:02:10.296369 2485 log.go:172] (0xc000528d10) Data frame received for 5\nI0308 14:02:10.296376 2485 log.go:172] (0xc0006e5c20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.5 30713\nConnection to 172.17.0.5 30713 port [tcp/30713] succeeded!\nI0308 14:02:10.298009 2485 log.go:172] (0xc000528d10) Data frame received for 1\nI0308 14:02:10.298037 2485 log.go:172] (0xc0006e5a40) (1) Data frame handling\nI0308 14:02:10.298055 2485 log.go:172] (0xc0006e5a40) (1) Data frame sent\nI0308 14:02:10.298072 2485 log.go:172] (0xc000528d10) (0xc0006e5a40) Stream removed, broadcasting: 1\nI0308 14:02:10.298094 2485 log.go:172] (0xc000528d10) Go away received\nI0308 14:02:10.298442 2485 log.go:172] (0xc000528d10) (0xc0006e5a40) Stream removed, broadcasting: 1\nI0308 14:02:10.298464 2485 log.go:172] (0xc000528d10) (0xc000988000) Stream removed, broadcasting: 3\nI0308 14:02:10.298473 2485 log.go:172] (0xc000528d10) (0xc0006e5c20) Stream removed, broadcasting: 5\n" Mar 8 14:02:10.300: INFO: stdout: "" Mar 8 14:02:10.300: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:02:10.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-556" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:7.074 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":184,"skipped":3153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:02:10.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 14:02:10.897: INFO: deployment "sample-webhook-deployment" 
doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 14:02:13.935: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:02:13.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-267" for this suite. STEP: Destroying namespace "webhook-267-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":185,"skipped":3176,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:02:14.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:02:14.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 8 14:02:14.714: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T14:02:14Z generation:1 name:name1 resourceVersion:24231 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:85781a64-ad9a-44a8-97a7-2ff2709b7105] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 8 14:02:24.719: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T14:02:24Z generation:1 name:name2 resourceVersion:24304 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1b14a04c-b1f4-4efe-81c8-0c24a1ce2525] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 8 14:02:34.725: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T14:02:14Z generation:2 name:name1 resourceVersion:24331 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:85781a64-ad9a-44a8-97a7-2ff2709b7105] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 8 14:02:44.731: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T14:02:24Z generation:2 name:name2 resourceVersion:24361 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1b14a04c-b1f4-4efe-81c8-0c24a1ce2525] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 8 14:02:54.738: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T14:02:14Z generation:2 name:name1 resourceVersion:24391 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:85781a64-ad9a-44a8-97a7-2ff2709b7105] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 8 14:03:04.745: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-08T14:02:24Z generation:2 name:name2 resourceVersion:24421 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1b14a04c-b1f4-4efe-81c8-0c24a1ce2525] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 
14:03:15.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2228" for this suite. • [SLOW TEST:61.249 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":186,"skipped":3190,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:03:15.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5539.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5539.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > 
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5539.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5539.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5539.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 14:03:17.387: INFO: DNS probes using dns-5539/dns-test-c69d0403-566f-4612-a497-6bef25ad15cf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:03:17.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5539" for this suite. 
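The awk pipeline in the probe commands above derives a pod A-record name from the pod's IP by replacing the dots with dashes. A minimal sketch of that same transformation in plain shell (the pod IP below is illustrative, not taken from this run; the namespace is the real dns-5539 from above):

```shell
#!/bin/sh
# Derive the in-cluster DNS A-record name for a pod from its IP,
# as the dig probes above do: 10.244.1.7 -> 10-244-1-7.<ns>.pod.cluster.local
pod_ip="10.244.1.7"        # illustrative pod IP (assumption; the test uses `hostname -i`)
namespace="dns-5539"       # namespace from the test run above
pod_a_rec="$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4}').${namespace}.pod.cluster.local"
echo "$pod_a_rec"
```

The probes then resolve this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an OK marker file per successful lookup.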
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":187,"skipped":3192,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:03:17.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 8 14:03:17.533: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24495 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 14:03:17.533: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24495 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the 
correct watchers observe the notification Mar 8 14:03:27.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24547 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 8 14:03:27.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24547 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 8 14:03:37.548: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24577 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 14:03:37.548: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24577 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 8 14:03:47.555: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24608 0 2020-03-08 14:03:17 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 14:03:47.555: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-a f7edccb5-244d-40fd-ad3a-29485dc00b64 24608 0 2020-03-08 14:03:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 8 14:03:57.562: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-b 9ccff8ca-4d96-4607-aeb5-863921e34a30 24638 0 2020-03-08 14:03:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 14:03:57.562: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-b 9ccff8ca-4d96-4607-aeb5-863921e34a30 24638 0 2020-03-08 14:03:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 8 14:04:07.569: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-b 9ccff8ca-4d96-4607-aeb5-863921e34a30 24668 0 2020-03-08 14:03:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 14:04:07.569: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7151 /api/v1/namespaces/watch-7151/configmaps/e2e-watch-test-configmap-b 9ccff8ca-4d96-4607-aeb5-863921e34a30 24668 0 
2020-03-08 14:03:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:04:17.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7151" for this suite. • [SLOW TEST:60.133 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":188,"skipped":3196,"failed":0} [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:04:17.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 14:04:17.653: INFO: Waiting up to 5m0s for pod "pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c" in namespace "emptydir-302" to be "success or failure" Mar 8 14:04:17.657: INFO: Pod 
"pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355704ms Mar 8 14:04:19.661: INFO: Pod "pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00833653s STEP: Saw pod success Mar 8 14:04:19.661: INFO: Pod "pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c" satisfied condition "success or failure" Mar 8 14:04:19.664: INFO: Trying to get logs from node kind-worker pod pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c container test-container: STEP: delete the pod Mar 8 14:04:19.695: INFO: Waiting for pod pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c to disappear Mar 8 14:04:19.713: INFO: Pod pod-c70bb5c0-c5b4-4818-91f9-ea17e9ee6b7c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:04:19.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-302" for this suite. 
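The (root,0777,default) case above mounts an emptyDir volume with mode 0777 on the default medium and verifies the mode bits the container observes. The permission check itself can be sketched locally with a temp directory (this is an illustrative analog, not the e2e harness code):

```shell
#!/bin/sh
# Local analog of the emptyDir 0777 mode check (illustrative, not the e2e code).
dir="$(mktemp -d)"
chmod 0777 "$dir"              # the volume mode under test
perms="$(stat -c %a "$dir")"   # observed mode bits (GNU stat)
echo "mode=$perms"
rm -rf "$dir"
```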
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3196,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:04:19.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 14:04:19.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-651' Mar 8 14:04:19.903: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 14:04:19.903: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Mar 8 14:04:23.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-651' Mar 8 14:04:24.037: INFO: stderr: "" Mar 8 14:04:24.037: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:04:24.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-651" for this suite. 
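The stderr captured above notes that `kubectl run --generator=deployment/apps.v1` is deprecated and suggests `kubectl create instead`. A hedged equivalent of the command this test runs, using the recommended form (the image and namespace are from this run; this cannot run without a live cluster and kubeconfig):

```shell
# Deprecated form used by the test:
#   kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine \
#     --generator=deployment/apps.v1 --namespace=kubectl-651
# Recommended replacement:
kubectl create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine \
  --namespace=kubectl-651
```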
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":190,"skipped":3196,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:04:24.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:04:24.117: INFO: (0) /api/v1/nodes/kind-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.56589ms)
Mar 8 14:04:24.120: INFO: (1) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.867934ms)
Mar 8 14:04:24.123: INFO: (2) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.282719ms)
Mar 8 14:04:24.126: INFO: (3) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 3.280878ms)
Mar 8 14:04:24.129: INFO: (4) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.756163ms)
Mar 8 14:04:24.131: INFO: (5) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.612054ms)
Mar 8 14:04:24.134: INFO: (6) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.597558ms)
Mar 8 14:04:24.137: INFO: (7) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.706564ms)
Mar 8 14:04:24.139: INFO: (8) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.514011ms)
Mar 8 14:04:24.142: INFO: (9) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.424522ms)
Mar 8 14:04:24.144: INFO: (10) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.631882ms)
Mar 8 14:04:24.147: INFO: (11) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.76426ms)
Mar 8 14:04:24.152: INFO: (12) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 5.078106ms)
Mar 8 14:04:24.156: INFO: (13) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 4.172451ms)
Mar 8 14:04:24.159: INFO: (14) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.489763ms)
Mar 8 14:04:24.162: INFO: (15) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.597664ms)
Mar 8 14:04:24.164: INFO: (16) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.271002ms)
Mar 8 14:04:24.166: INFO: (17) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.388355ms)
Mar 8 14:04:24.168: INFO: (18) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/ (200; 2.079477ms)
Mar 8 14:04:24.171: INFO: (19) /api/v1/nodes/kind-worker:10250/proxy/logs/: containers/ pods/
(200; 2.291945ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:04:24.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5954" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":191,"skipped":3197,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:04:24.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0235874d-473d-4302-97a7-f82f3cb56ef4 STEP: Creating a pod to test consume configMaps Mar 8 14:04:24.238: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d" in namespace "projected-2830" to be "success or failure" Mar 8 14:04:24.245: INFO: Pod "pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328598ms Mar 8 14:04:26.251: INFO: Pod "pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013013206s Mar 8 14:04:28.255: INFO: Pod "pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017118955s STEP: Saw pod success Mar 8 14:04:28.255: INFO: Pod "pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d" satisfied condition "success or failure" Mar 8 14:04:28.259: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d container projected-configmap-volume-test: STEP: delete the pod Mar 8 14:04:28.278: INFO: Waiting for pod pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d to disappear Mar 8 14:04:28.281: INFO: Pod pod-projected-configmaps-b4b7d747-b165-4ff1-bd93-a226ea538d9d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:04:28.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2830" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3203,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:04:28.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5053.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 14:04:42.399: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.402: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.405: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.408: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.416: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.418: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod 
dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.421: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.424: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:42.428: INFO: Lookups using dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local] Mar 8 14:04:47.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.435: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.438: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod 
dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.441: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.450: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.453: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.455: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.458: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:47.463: INFO: Lookups using dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local] Mar 8 14:04:52.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.436: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.439: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.442: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.451: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.454: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.457: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod 
dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.460: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:52.465: INFO: Lookups using dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local] Mar 8 14:04:57.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.436: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.439: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.442: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod 
dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.451: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.454: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.457: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.460: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:04:57.467: INFO: Lookups using dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local] Mar 8 14:05:02.433: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local 
from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.436: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.439: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.442: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.450: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.453: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.455: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.458: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local from pod dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e: the 
server could not find the requested resource (get pods dns-test-9d091ee9-1894-4b60-9734-709434d39b9e) Mar 8 14:05:02.463: INFO: Lookups using dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5053.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5053.svc.cluster.local jessie_udp@dns-test-service-2.dns-5053.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5053.svc.cluster.local] Mar 8 14:05:07.468: INFO: DNS probes using dns-5053/dns-test-9d091ee9-1894-4b60-9734-709434d39b9e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:07.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5053" for this suite. 
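The names probed in the DNS test above follow the pattern for pods behind a headless service with a subdomain: `<hostname>.<service>.<namespace>.svc.<cluster-domain>`. A minimal sketch of how such a lookup name is assembled (the helper function and the default cluster domain are illustrative assumptions, not part of the test framework):

```python
def pod_subdomain_fqdn(hostname: str, service: str, namespace: str,
                       cluster_domain: str = "cluster.local") -> str:
    # Pattern exercised by the probes in the log above:
    # <hostname>.<service>.<namespace>.svc.<cluster-domain>
    return f"{hostname}.{service}.{namespace}.svc.{cluster_domain}"

# One of the names the test resolves from both the wheezy and jessie probers:
fqdn = pod_subdomain_fqdn("dns-querier-2", "dns-test-service-2", "dns-5053")
print(fqdn)
```

The probes succeed only once the headless service's endpoints are published, which is why the log shows several failed retry rounds before "DNS probes ... succeeded".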
• [SLOW TEST:39.293 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":193,"skipped":3213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:07.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:11.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9945" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:11.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-d8f5ec1e-f422-405b-8b34-a37dd2653319 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d8f5ec1e-f422-405b-8b34-a37dd2653319 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:15.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9676" for this suite. 
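The "waiting to observe update in volume" step in the ConfigMap test above is a poll: after the ConfigMap object is updated, the kubelet eventually syncs the new value into the mounted file, and the test rereads the file until the new content appears. A sketch of that wait pattern, simulated with a local file rather than a real volume (paths and timeouts here are illustrative):

```python
import os
import tempfile
import time


def wait_for_content(path: str, expected: str, timeout: float = 5.0,
                     interval: float = 0.05) -> bool:
    # Re-read the file until it holds the expected value or the deadline
    # passes, mirroring the test's poll for the updated ConfigMap volume.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(path) as f:
                if f.read() == expected:
                    return True
        except FileNotFoundError:
            pass
        time.sleep(interval)
    return False


# Simulate the kubelet rewriting the projected file with the updated value:
with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "data-1")
    with open(p, "w") as f:
        f.write("value-1")   # initial ConfigMap value
    with open(p, "w") as f:
        f.write("value-2")   # updated value, as in the test
    assert wait_for_content(p, "value-2")
```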
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3267,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:15.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 14:05:16.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc" in namespace "downward-api-8070" to be "success or failure" Mar 8 14:05:16.048: INFO: Pod "downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.304978ms Mar 8 14:05:18.052: INFO: Pod "downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.020123353s STEP: Saw pod success Mar 8 14:05:18.052: INFO: Pod "downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc" satisfied condition "success or failure" Mar 8 14:05:18.055: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc container client-container: STEP: delete the pod Mar 8 14:05:18.131: INFO: Waiting for pod downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc to disappear Mar 8 14:05:18.137: INFO: Pod downwardapi-volume-f1164024-e22a-40fc-a4d1-b4e986b8a4cc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:18.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8070" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:18.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 8 14:05:18.211: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-465' Mar 8 14:05:18.561: INFO: stderr: "" Mar 8 14:05:18.561: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 8 14:05:19.566: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:05:19.566: INFO: Found 0 / 1 Mar 8 14:05:20.565: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:05:20.565: INFO: Found 1 / 1 Mar 8 14:05:20.565: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 8 14:05:20.568: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:05:20.568: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 14:05:20.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-w82zb --namespace=kubectl-465 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 8 14:05:20.706: INFO: stderr: "" Mar 8 14:05:20.706: INFO: stdout: "pod/agnhost-master-w82zb patched\n" STEP: checking annotations Mar 8 14:05:20.710: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:05:20.710: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:20.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-465" for this suite. 
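The `kubectl patch` command shown above applies a strategic-merge patch that adds the annotation `x: y` to the pod. For a plain string map like `metadata.annotations`, the effect is a recursive map merge, which can be sketched in plain Python (the helper is illustrative; real patching is performed server-side by the API server):

```python
def merge_patch(obj: dict, patch: dict) -> dict:
    # Recursively merge `patch` into `obj`, returning a new dict.
    # For map-valued fields such as annotations, this matches the
    # merge behavior a strategic-merge patch produces.
    out = dict(obj)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)
        else:
            out[key] = value
    return out


pod = {"metadata": {"name": "agnhost-master-w82zb", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

The test's final "checking annotations" step then just verifies that each matched pod carries the `x: y` annotation.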
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":197,"skipped":3321,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:20.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:22.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4630" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3328,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:22.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-j4t8 STEP: Creating a pod to test atomic-volume-subpath Mar 8 14:05:22.904: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-j4t8" in namespace "subpath-6250" to be "success or failure" Mar 8 14:05:22.907: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.570214ms Mar 8 14:05:24.911: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 2.006191491s Mar 8 14:05:26.914: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 4.010047676s Mar 8 14:05:28.919: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.014980313s Mar 8 14:05:30.923: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 8.018644999s Mar 8 14:05:32.927: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 10.022554342s Mar 8 14:05:34.931: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 12.026341791s Mar 8 14:05:36.935: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 14.030171695s Mar 8 14:05:38.938: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 16.034070427s Mar 8 14:05:40.942: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 18.037847592s Mar 8 14:05:42.946: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Running", Reason="", readiness=true. Elapsed: 20.041452637s Mar 8 14:05:44.950: INFO: Pod "pod-subpath-test-downwardapi-j4t8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.045785362s STEP: Saw pod success Mar 8 14:05:44.950: INFO: Pod "pod-subpath-test-downwardapi-j4t8" satisfied condition "success or failure" Mar 8 14:05:44.953: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-downwardapi-j4t8 container test-container-subpath-downwardapi-j4t8: STEP: delete the pod Mar 8 14:05:44.978: INFO: Waiting for pod pod-subpath-test-downwardapi-j4t8 to disappear Mar 8 14:05:44.982: INFO: Pod pod-subpath-test-downwardapi-j4t8 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-j4t8 Mar 8 14:05:44.982: INFO: Deleting pod "pod-subpath-test-downwardapi-j4t8" in namespace "subpath-6250" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:44.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6250" for this suite. 
• [SLOW TEST:22.158 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":199,"skipped":3329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:44.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-be294fc8-307e-44c6-84d4-ef92c4c51e42 STEP: Creating a pod to test consume configMaps Mar 8 14:05:45.093: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28" in namespace "projected-5057" to be "success or failure" Mar 8 14:05:45.109: INFO: Pod "pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.868405ms Mar 8 14:05:47.113: INFO: Pod "pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019595546s STEP: Saw pod success Mar 8 14:05:47.113: INFO: Pod "pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28" satisfied condition "success or failure" Mar 8 14:05:47.116: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28 container projected-configmap-volume-test: STEP: delete the pod Mar 8 14:05:47.134: INFO: Waiting for pod pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28 to disappear Mar 8 14:05:47.152: INFO: Pod pod-projected-configmaps-9a65b6a2-c671-4a4d-a5be-bbd37af98f28 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:47.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5057" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:47.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:05:47.280: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c412dc4a-5ae0-430b-832b-7d6228824574", Controller:(*bool)(0xc002388976), BlockOwnerDeletion:(*bool)(0xc002388977)}} Mar 8 14:05:47.290: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"29808767-983c-4073-b3a3-813a923218e8", Controller:(*bool)(0xc002b2f20e), BlockOwnerDeletion:(*bool)(0xc002b2f20f)}} Mar 8 14:05:47.309: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f530e033-49ec-4209-a2fb-d42e2d1f8a3b", Controller:(*bool)(0xc003a39256), BlockOwnerDeletion:(*bool)(0xc003a39257)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:52.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "gc-9261" for this suite. • [SLOW TEST:5.169 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":201,"skipped":3397,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:52.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 8 14:05:52.388: INFO: Waiting up to 5m0s for pod "downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766" in namespace "downward-api-3175" to be "success or failure" Mar 8 14:05:52.404: INFO: Pod "downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766": Phase="Pending", Reason="", readiness=false. Elapsed: 16.270631ms Mar 8 14:05:54.412: INFO: Pod "downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023588213s Mar 8 14:05:56.416: INFO: Pod "downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0277044s STEP: Saw pod success Mar 8 14:05:56.416: INFO: Pod "downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766" satisfied condition "success or failure" Mar 8 14:05:56.420: INFO: Trying to get logs from node kind-worker pod downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766 container dapi-container: STEP: delete the pod Mar 8 14:05:56.454: INFO: Waiting for pod downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766 to disappear Mar 8 14:05:56.463: INFO: Pod downward-api-f1e21ad8-aef3-4475-a411-0fcb939d9766 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:05:56.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3175" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3405,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:05:56.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 8 14:05:56.538: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 14:05:56.558: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 14:05:56.562: INFO: Logging pods the kubelet thinks is on node kind-worker before test Mar 8 14:05:56.567: INFO: kindnet-p9whg from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 14:05:56.567: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 14:05:56.567: INFO: kube-proxy-pz8tf from kube-system started at 2020-03-08 12:58:54 +0000 UTC (1 container statuses recorded) Mar 8 14:05:56.567: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 14:05:56.567: INFO: Logging pods the kubelet thinks is on node kind-worker2 before test Mar 8 14:05:56.575: INFO: kube-proxy-vfcnx from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 14:05:56.575: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 14:05:56.575: INFO: busybox-host-aliases84ceda6b-4a9d-4fcc-ae54-b7bbb0caaff7 from kubelet-test-4630 started at 2020-03-08 14:05:20 +0000 UTC (1 container statuses recorded) Mar 8 14:05:56.575: INFO: Container busybox-host-aliases84ceda6b-4a9d-4fcc-ae54-b7bbb0caaff7 ready: true, restart count 0 Mar 8 14:05:56.575: INFO: kindnet-mjfxb from kube-system started at 2020-03-08 12:58:53 +0000 UTC (1 container statuses recorded) Mar 8 14:05:56.575: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-090ad2c1-f2ce-4427-bc51-91f6699fe755 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-090ad2c1-f2ce-4427-bc51-91f6699fe755 off the node kind-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-090ad2c1-f2ce-4427-bc51-91f6699fe755 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:06:00.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-888" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":203,"skipped":3422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:06:00.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 8 14:06:00.825: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in 
OpenAPI documentation Mar 8 14:06:12.757: INFO: >>> kubeConfig: /root/.kube/config Mar 8 14:06:16.162: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:06:27.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5209" for this suite. • [SLOW TEST:26.962 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":204,"skipped":3449,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:06:27.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
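The lifecycle-hook test being set up here checks an ordering guarantee: the kubelet delivers the postStart httpGet to the handler before the hooked pod counts as started. A minimal local sketch of that ordering (an illustrative simulation only, not part of this run and with no cluster involved):

```shell
# Simulated postStart ordering: the container starts, the kubelet delivers
# the hook (here just a flag flip standing in for the httpGet), and only
# then is the pod treated as started.
container_started=true
hook_delivered=false
if [ "$container_started" = true ]; then
  # stands in for the kubelet's httpGet against the handler pod
  hook_delivered=true
fi
[ "$hook_delivered" = true ] && pod_started=true
echo "pod_started=$pod_started"
```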
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 8 14:06:31.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 14:06:31.780: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 14:06:33.780: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 14:06:33.784: INFO: Pod pod-with-poststart-http-hook still exists Mar 8 14:06:35.780: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 8 14:06:35.783: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:06:35.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3313" for this suite. • [SLOW TEST:8.111 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3457,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:06:35.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:06:35.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8256" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":206,"skipped":3477,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:06:35.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 14:06:38.016: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:06:38.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-621" for this suite. 
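The TerminationMessagePolicy behavior exercised just above (the test saw "DONE" as the termination message) can be sketched locally. With FallbackToLogsOnError, when a failed container wrote nothing to /dev/termination-log, the kubelet falls back to the tail of the container log; this is only an illustrative simulation of that fallback, not anything executed during this run:

```shell
# Fallback path: empty termination-message file, so the log tail is used.
log="DONE"                 # what the container wrote to its log
termination_msg=""         # nothing written to /dev/termination-log
if [ -z "$termination_msg" ]; then
  termination_msg="$log"   # FallbackToLogsOnError kicks in
fi
echo "$termination_msg"
```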
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3490,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:06:38.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9855 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-9855 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9855 Mar 8 14:06:38.153: INFO: Found 0 stateful pods, waiting for 1 Mar 8 14:06:48.157: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 8 14:06:48.160: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 14:06:48.436: INFO: stderr: "I0308 14:06:48.338659 2586 log.go:172] (0xc0005ae0b0) (0xc00067dae0) Create stream\nI0308 14:06:48.338718 2586 log.go:172] (0xc0005ae0b0) (0xc00067dae0) Stream added, broadcasting: 1\nI0308 14:06:48.341352 2586 log.go:172] (0xc0005ae0b0) Reply frame received for 1\nI0308 14:06:48.341426 2586 log.go:172] (0xc0005ae0b0) (0xc00067db80) Create stream\nI0308 14:06:48.341456 2586 log.go:172] (0xc0005ae0b0) (0xc00067db80) Stream added, broadcasting: 3\nI0308 14:06:48.342760 2586 log.go:172] (0xc0005ae0b0) Reply frame received for 3\nI0308 14:06:48.342793 2586 log.go:172] (0xc0005ae0b0) (0xc00098c000) Create stream\nI0308 14:06:48.342805 2586 log.go:172] (0xc0005ae0b0) (0xc00098c000) Stream added, broadcasting: 5\nI0308 14:06:48.343862 2586 log.go:172] (0xc0005ae0b0) Reply frame received for 5\nI0308 14:06:48.401074 2586 log.go:172] (0xc0005ae0b0) Data frame received for 5\nI0308 14:06:48.401106 2586 log.go:172] (0xc00098c000) (5) Data frame handling\nI0308 14:06:48.401126 2586 log.go:172] (0xc00098c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 14:06:48.430089 2586 log.go:172] (0xc0005ae0b0) Data frame received for 3\nI0308 14:06:48.430144 2586 log.go:172] (0xc00067db80) (3) Data frame handling\nI0308 14:06:48.430192 2586 log.go:172] (0xc00067db80) (3) Data frame sent\nI0308 14:06:48.430326 2586 log.go:172] (0xc0005ae0b0) Data frame received for 5\nI0308 14:06:48.430348 2586 log.go:172] (0xc00098c000) (5) Data frame handling\nI0308 14:06:48.430697 2586 log.go:172] (0xc0005ae0b0) Data frame received for 3\nI0308 14:06:48.430727 2586 log.go:172] (0xc00067db80) (3) Data frame handling\nI0308 14:06:48.432507 2586 log.go:172] (0xc0005ae0b0) Data frame received for 1\nI0308 14:06:48.432528 2586 log.go:172] (0xc00067dae0) (1) Data frame handling\nI0308 
14:06:48.432539 2586 log.go:172] (0xc00067dae0) (1) Data frame sent\nI0308 14:06:48.432551 2586 log.go:172] (0xc0005ae0b0) (0xc00067dae0) Stream removed, broadcasting: 1\nI0308 14:06:48.432578 2586 log.go:172] (0xc0005ae0b0) Go away received\nI0308 14:06:48.432947 2586 log.go:172] (0xc0005ae0b0) (0xc00067dae0) Stream removed, broadcasting: 1\nI0308 14:06:48.432970 2586 log.go:172] (0xc0005ae0b0) (0xc00067db80) Stream removed, broadcasting: 3\nI0308 14:06:48.432981 2586 log.go:172] (0xc0005ae0b0) (0xc00098c000) Stream removed, broadcasting: 5\n" Mar 8 14:06:48.436: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 14:06:48.436: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 14:06:48.440: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 14:06:58.451: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:06:58.451: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 14:06:58.474: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:06:58.474: INFO: ss-0 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:38 +0000 UTC }] Mar 8 14:06:58.475: INFO: Mar 8 14:06:58.475: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 8 14:06:59.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986083169s Mar 8 14:07:00.493: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971179957s Mar 8 14:07:01.497: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 6.967089454s Mar 8 14:07:02.508: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963459234s Mar 8 14:07:03.512: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952995385s Mar 8 14:07:04.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.94884694s Mar 8 14:07:05.520: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.94477201s Mar 8 14:07:06.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.940029083s Mar 8 14:07:07.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.795968ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9855 Mar 8 14:07:08.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 14:07:08.758: INFO: stderr: "I0308 14:07:08.694807 2606 log.go:172] (0xc00092e6e0) (0xc0003b4500) Create stream\nI0308 14:07:08.694849 2606 log.go:172] (0xc00092e6e0) (0xc0003b4500) Stream added, broadcasting: 1\nI0308 14:07:08.698968 2606 log.go:172] (0xc00092e6e0) Reply frame received for 1\nI0308 14:07:08.698998 2606 log.go:172] (0xc00092e6e0) (0xc0006d7a40) Create stream\nI0308 14:07:08.699008 2606 log.go:172] (0xc00092e6e0) (0xc0006d7a40) Stream added, broadcasting: 3\nI0308 14:07:08.699878 2606 log.go:172] (0xc00092e6e0) Reply frame received for 3\nI0308 14:07:08.699904 2606 log.go:172] (0xc00092e6e0) (0xc000610640) Create stream\nI0308 14:07:08.699920 2606 log.go:172] (0xc00092e6e0) (0xc000610640) Stream added, broadcasting: 5\nI0308 14:07:08.700716 2606 log.go:172] (0xc00092e6e0) Reply frame received for 5\nI0308 14:07:08.753892 2606 log.go:172] (0xc00092e6e0) Data frame received for 5\nI0308 14:07:08.753914 2606 log.go:172] (0xc000610640) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0308 14:07:08.753943 2606 log.go:172] (0xc00092e6e0) Data frame received for 3\nI0308 14:07:08.754000 2606 log.go:172] (0xc0006d7a40) (3) Data frame handling\nI0308 14:07:08.754017 2606 log.go:172] (0xc000610640) (5) Data frame sent\nI0308 14:07:08.754049 2606 log.go:172] (0xc00092e6e0) Data frame received for 5\nI0308 14:07:08.754056 2606 log.go:172] (0xc000610640) (5) Data frame handling\nI0308 14:07:08.754070 2606 log.go:172] (0xc0006d7a40) (3) Data frame sent\nI0308 14:07:08.754085 2606 log.go:172] (0xc00092e6e0) Data frame received for 3\nI0308 14:07:08.754092 2606 log.go:172] (0xc0006d7a40) (3) Data frame handling\nI0308 14:07:08.755602 2606 log.go:172] (0xc00092e6e0) Data frame received for 1\nI0308 14:07:08.755623 2606 log.go:172] (0xc0003b4500) (1) Data frame handling\nI0308 14:07:08.755632 2606 log.go:172] (0xc0003b4500) (1) Data frame sent\nI0308 14:07:08.755641 2606 log.go:172] (0xc00092e6e0) (0xc0003b4500) Stream removed, broadcasting: 1\nI0308 14:07:08.755913 2606 log.go:172] (0xc00092e6e0) (0xc0003b4500) Stream removed, broadcasting: 1\nI0308 14:07:08.755929 2606 log.go:172] (0xc00092e6e0) (0xc0006d7a40) Stream removed, broadcasting: 3\nI0308 14:07:08.755937 2606 log.go:172] (0xc00092e6e0) (0xc000610640) Stream removed, broadcasting: 5\n" Mar 8 14:07:08.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 14:07:08.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 14:07:08.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 14:07:08.941: INFO: stderr: "I0308 14:07:08.880666 2627 log.go:172] (0xc000a27600) (0xc000a22780) Create stream\nI0308 14:07:08.880712 2627 log.go:172] (0xc000a27600) (0xc000a22780) Stream added, broadcasting: 1\nI0308 
14:07:08.883608 2627 log.go:172] (0xc000a27600) Reply frame received for 1\nI0308 14:07:08.883638 2627 log.go:172] (0xc000a27600) (0xc000a22000) Create stream\nI0308 14:07:08.883651 2627 log.go:172] (0xc000a27600) (0xc000a22000) Stream added, broadcasting: 3\nI0308 14:07:08.884343 2627 log.go:172] (0xc000a27600) Reply frame received for 3\nI0308 14:07:08.884382 2627 log.go:172] (0xc000a27600) (0xc0005d06e0) Create stream\nI0308 14:07:08.884397 2627 log.go:172] (0xc000a27600) (0xc0005d06e0) Stream added, broadcasting: 5\nI0308 14:07:08.885056 2627 log.go:172] (0xc000a27600) Reply frame received for 5\nI0308 14:07:08.937193 2627 log.go:172] (0xc000a27600) Data frame received for 3\nI0308 14:07:08.937207 2627 log.go:172] (0xc000a22000) (3) Data frame handling\nI0308 14:07:08.937229 2627 log.go:172] (0xc000a22000) (3) Data frame sent\nI0308 14:07:08.937240 2627 log.go:172] (0xc000a27600) Data frame received for 3\nI0308 14:07:08.937245 2627 log.go:172] (0xc000a22000) (3) Data frame handling\nI0308 14:07:08.937466 2627 log.go:172] (0xc000a27600) Data frame received for 5\nI0308 14:07:08.937482 2627 log.go:172] (0xc0005d06e0) (5) Data frame handling\nI0308 14:07:08.937505 2627 log.go:172] (0xc0005d06e0) (5) Data frame sent\nI0308 14:07:08.937515 2627 log.go:172] (0xc000a27600) Data frame received for 5\nI0308 14:07:08.937525 2627 log.go:172] (0xc0005d06e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 14:07:08.938923 2627 log.go:172] (0xc000a27600) Data frame received for 1\nI0308 14:07:08.938944 2627 log.go:172] (0xc000a22780) (1) Data frame handling\nI0308 14:07:08.938969 2627 log.go:172] (0xc000a22780) (1) Data frame sent\nI0308 14:07:08.938990 2627 log.go:172] (0xc000a27600) (0xc000a22780) Stream removed, broadcasting: 1\nI0308 14:07:08.939007 2627 log.go:172] (0xc000a27600) Go away received\nI0308 14:07:08.939226 2627 log.go:172] (0xc000a27600) (0xc000a22780) 
Stream removed, broadcasting: 1\nI0308 14:07:08.939239 2627 log.go:172] (0xc000a27600) (0xc000a22000) Stream removed, broadcasting: 3\nI0308 14:07:08.939244 2627 log.go:172] (0xc000a27600) (0xc0005d06e0) Stream removed, broadcasting: 5\n" Mar 8 14:07:08.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 14:07:08.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 14:07:08.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 8 14:07:09.127: INFO: stderr: "I0308 14:07:09.069399 2649 log.go:172] (0xc000104370) (0xc00068fa40) Create stream\nI0308 14:07:09.069443 2649 log.go:172] (0xc000104370) (0xc00068fa40) Stream added, broadcasting: 1\nI0308 14:07:09.071062 2649 log.go:172] (0xc000104370) Reply frame received for 1\nI0308 14:07:09.071097 2649 log.go:172] (0xc000104370) (0xc00062c640) Create stream\nI0308 14:07:09.071107 2649 log.go:172] (0xc000104370) (0xc00062c640) Stream added, broadcasting: 3\nI0308 14:07:09.071902 2649 log.go:172] (0xc000104370) Reply frame received for 3\nI0308 14:07:09.071931 2649 log.go:172] (0xc000104370) (0xc0008ee000) Create stream\nI0308 14:07:09.071939 2649 log.go:172] (0xc000104370) (0xc0008ee000) Stream added, broadcasting: 5\nI0308 14:07:09.072637 2649 log.go:172] (0xc000104370) Reply frame received for 5\nI0308 14:07:09.123366 2649 log.go:172] (0xc000104370) Data frame received for 3\nI0308 14:07:09.123389 2649 log.go:172] (0xc00062c640) (3) Data frame handling\nI0308 14:07:09.123398 2649 log.go:172] (0xc00062c640) (3) Data frame sent\nI0308 14:07:09.123421 2649 log.go:172] (0xc000104370) Data frame received for 5\nI0308 14:07:09.123447 2649 log.go:172] (0xc0008ee000) (5) Data frame handling\nI0308 14:07:09.123464 2649 log.go:172] (0xc0008ee000) (5) Data frame 
sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0308 14:07:09.123478 2649 log.go:172] (0xc000104370) Data frame received for 5\nI0308 14:07:09.123491 2649 log.go:172] (0xc0008ee000) (5) Data frame handling\nI0308 14:07:09.123526 2649 log.go:172] (0xc000104370) Data frame received for 3\nI0308 14:07:09.123551 2649 log.go:172] (0xc00062c640) (3) Data frame handling\nI0308 14:07:09.124651 2649 log.go:172] (0xc000104370) Data frame received for 1\nI0308 14:07:09.124676 2649 log.go:172] (0xc00068fa40) (1) Data frame handling\nI0308 14:07:09.124696 2649 log.go:172] (0xc00068fa40) (1) Data frame sent\nI0308 14:07:09.124715 2649 log.go:172] (0xc000104370) (0xc00068fa40) Stream removed, broadcasting: 1\nI0308 14:07:09.124742 2649 log.go:172] (0xc000104370) Go away received\nI0308 14:07:09.125033 2649 log.go:172] (0xc000104370) (0xc00068fa40) Stream removed, broadcasting: 1\nI0308 14:07:09.125050 2649 log.go:172] (0xc000104370) (0xc00062c640) Stream removed, broadcasting: 3\nI0308 14:07:09.125058 2649 log.go:172] (0xc000104370) (0xc0008ee000) Stream removed, broadcasting: 5\n" Mar 8 14:07:09.127: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 8 14:07:09.127: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 8 14:07:09.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 8 14:07:19.135: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 14:07:19.135: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 14:07:19.135: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 8 14:07:19.139: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 14:07:19.377: INFO: stderr: "I0308 14:07:19.301908 2669 log.go:172] (0xc0009fa000) (0xc00021de00) Create stream\nI0308 14:07:19.301964 2669 log.go:172] (0xc0009fa000) (0xc00021de00) Stream added, broadcasting: 1\nI0308 14:07:19.304477 2669 log.go:172] (0xc0009fa000) Reply frame received for 1\nI0308 14:07:19.304508 2669 log.go:172] (0xc0009fa000) (0xc00021dea0) Create stream\nI0308 14:07:19.304520 2669 log.go:172] (0xc0009fa000) (0xc00021dea0) Stream added, broadcasting: 3\nI0308 14:07:19.305659 2669 log.go:172] (0xc0009fa000) Reply frame received for 3\nI0308 14:07:19.305695 2669 log.go:172] (0xc0009fa000) (0xc00021df40) Create stream\nI0308 14:07:19.305705 2669 log.go:172] (0xc0009fa000) (0xc00021df40) Stream added, broadcasting: 5\nI0308 14:07:19.306719 2669 log.go:172] (0xc0009fa000) Reply frame received for 5\nI0308 14:07:19.373052 2669 log.go:172] (0xc0009fa000) Data frame received for 3\nI0308 14:07:19.373086 2669 log.go:172] (0xc0009fa000) Data frame received for 5\nI0308 14:07:19.373117 2669 log.go:172] (0xc00021df40) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 14:07:19.373146 2669 log.go:172] (0xc00021dea0) (3) Data frame handling\nI0308 14:07:19.373171 2669 log.go:172] (0xc00021dea0) (3) Data frame sent\nI0308 14:07:19.373190 2669 log.go:172] (0xc00021df40) (5) Data frame sent\nI0308 14:07:19.373199 2669 log.go:172] (0xc0009fa000) Data frame received for 5\nI0308 14:07:19.373210 2669 log.go:172] (0xc00021df40) (5) Data frame handling\nI0308 14:07:19.373348 2669 log.go:172] (0xc0009fa000) Data frame received for 3\nI0308 14:07:19.373369 2669 log.go:172] (0xc00021dea0) (3) Data frame handling\nI0308 14:07:19.374636 2669 log.go:172] (0xc0009fa000) Data frame received for 1\nI0308 14:07:19.374664 2669 log.go:172] (0xc00021de00) (1) Data frame handling\nI0308 
14:07:19.374686 2669 log.go:172] (0xc00021de00) (1) Data frame sent\nI0308 14:07:19.374706 2669 log.go:172] (0xc0009fa000) (0xc00021de00) Stream removed, broadcasting: 1\nI0308 14:07:19.374727 2669 log.go:172] (0xc0009fa000) Go away received\nI0308 14:07:19.375059 2669 log.go:172] (0xc0009fa000) (0xc00021de00) Stream removed, broadcasting: 1\nI0308 14:07:19.375078 2669 log.go:172] (0xc0009fa000) (0xc00021dea0) Stream removed, broadcasting: 3\nI0308 14:07:19.375088 2669 log.go:172] (0xc0009fa000) (0xc00021df40) Stream removed, broadcasting: 5\n" Mar 8 14:07:19.377: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 14:07:19.377: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 14:07:19.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 14:07:19.592: INFO: stderr: "I0308 14:07:19.507959 2689 log.go:172] (0xc0000f5290) (0xc000703d60) Create stream\nI0308 14:07:19.508021 2689 log.go:172] (0xc0000f5290) (0xc000703d60) Stream added, broadcasting: 1\nI0308 14:07:19.510025 2689 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0308 14:07:19.510075 2689 log.go:172] (0xc0000f5290) (0xc00074b4a0) Create stream\nI0308 14:07:19.510089 2689 log.go:172] (0xc0000f5290) (0xc00074b4a0) Stream added, broadcasting: 3\nI0308 14:07:19.510998 2689 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0308 14:07:19.511027 2689 log.go:172] (0xc0000f5290) (0xc00074b540) Create stream\nI0308 14:07:19.511038 2689 log.go:172] (0xc0000f5290) (0xc00074b540) Stream added, broadcasting: 5\nI0308 14:07:19.511832 2689 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0308 14:07:19.561517 2689 log.go:172] (0xc0000f5290) Data frame received for 5\nI0308 14:07:19.561544 2689 log.go:172] (0xc00074b540) (5) Data frame 
handling\nI0308 14:07:19.561568 2689 log.go:172] (0xc00074b540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 14:07:19.588605 2689 log.go:172] (0xc0000f5290) Data frame received for 3\nI0308 14:07:19.588632 2689 log.go:172] (0xc00074b4a0) (3) Data frame handling\nI0308 14:07:19.588652 2689 log.go:172] (0xc00074b4a0) (3) Data frame sent\nI0308 14:07:19.588791 2689 log.go:172] (0xc0000f5290) Data frame received for 5\nI0308 14:07:19.588805 2689 log.go:172] (0xc00074b540) (5) Data frame handling\nI0308 14:07:19.588992 2689 log.go:172] (0xc0000f5290) Data frame received for 3\nI0308 14:07:19.589012 2689 log.go:172] (0xc00074b4a0) (3) Data frame handling\nI0308 14:07:19.590401 2689 log.go:172] (0xc0000f5290) Data frame received for 1\nI0308 14:07:19.590430 2689 log.go:172] (0xc000703d60) (1) Data frame handling\nI0308 14:07:19.590450 2689 log.go:172] (0xc000703d60) (1) Data frame sent\nI0308 14:07:19.590462 2689 log.go:172] (0xc0000f5290) (0xc000703d60) Stream removed, broadcasting: 1\nI0308 14:07:19.590471 2689 log.go:172] (0xc0000f5290) Go away received\nI0308 14:07:19.590914 2689 log.go:172] (0xc0000f5290) (0xc000703d60) Stream removed, broadcasting: 1\nI0308 14:07:19.590932 2689 log.go:172] (0xc0000f5290) (0xc00074b4a0) Stream removed, broadcasting: 3\nI0308 14:07:19.590939 2689 log.go:172] (0xc0000f5290) (0xc00074b540) Stream removed, broadcasting: 5\n" Mar 8 14:07:19.593: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 14:07:19.593: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 14:07:19.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9855 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 8 14:07:19.821: INFO: stderr: "I0308 14:07:19.722244 2710 log.go:172] (0xc0004e8790) (0xc0006c79a0) Create stream\nI0308 
14:07:19.722296 2710 log.go:172] (0xc0004e8790) (0xc0006c79a0) Stream added, broadcasting: 1\nI0308 14:07:19.724398 2710 log.go:172] (0xc0004e8790) Reply frame received for 1\nI0308 14:07:19.724427 2710 log.go:172] (0xc0004e8790) (0xc000a24000) Create stream\nI0308 14:07:19.724435 2710 log.go:172] (0xc0004e8790) (0xc000a24000) Stream added, broadcasting: 3\nI0308 14:07:19.725182 2710 log.go:172] (0xc0004e8790) Reply frame received for 3\nI0308 14:07:19.725205 2710 log.go:172] (0xc0004e8790) (0xc000942000) Create stream\nI0308 14:07:19.725213 2710 log.go:172] (0xc0004e8790) (0xc000942000) Stream added, broadcasting: 5\nI0308 14:07:19.725854 2710 log.go:172] (0xc0004e8790) Reply frame received for 5\nI0308 14:07:19.791317 2710 log.go:172] (0xc0004e8790) Data frame received for 5\nI0308 14:07:19.791337 2710 log.go:172] (0xc000942000) (5) Data frame handling\nI0308 14:07:19.791346 2710 log.go:172] (0xc000942000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0308 14:07:19.816720 2710 log.go:172] (0xc0004e8790) Data frame received for 5\nI0308 14:07:19.816761 2710 log.go:172] (0xc000942000) (5) Data frame handling\nI0308 14:07:19.816783 2710 log.go:172] (0xc0004e8790) Data frame received for 3\nI0308 14:07:19.816804 2710 log.go:172] (0xc000a24000) (3) Data frame handling\nI0308 14:07:19.816815 2710 log.go:172] (0xc000a24000) (3) Data frame sent\nI0308 14:07:19.816822 2710 log.go:172] (0xc0004e8790) Data frame received for 3\nI0308 14:07:19.816827 2710 log.go:172] (0xc000a24000) (3) Data frame handling\nI0308 14:07:19.818491 2710 log.go:172] (0xc0004e8790) Data frame received for 1\nI0308 14:07:19.818513 2710 log.go:172] (0xc0006c79a0) (1) Data frame handling\nI0308 14:07:19.818527 2710 log.go:172] (0xc0006c79a0) (1) Data frame sent\nI0308 14:07:19.818548 2710 log.go:172] (0xc0004e8790) (0xc0006c79a0) Stream removed, broadcasting: 1\nI0308 14:07:19.818565 2710 log.go:172] (0xc0004e8790) Go away received\nI0308 14:07:19.818903 2710 log.go:172] 
(0xc0004e8790) (0xc0006c79a0) Stream removed, broadcasting: 1\nI0308 14:07:19.818927 2710 log.go:172] (0xc0004e8790) (0xc000a24000) Stream removed, broadcasting: 3\nI0308 14:07:19.818939 2710 log.go:172] (0xc0004e8790) (0xc000942000) Stream removed, broadcasting: 5\n" Mar 8 14:07:19.821: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 8 14:07:19.821: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 8 14:07:19.821: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 14:07:19.824: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 8 14:07:29.841: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:07:29.842: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:07:29.842: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:07:29.865: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:29.865: INFO: ss-0 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:38 +0000 UTC }] Mar 8 14:07:29.866: INFO: ss-1 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:29.866: INFO: ss-2 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:29.866: INFO: Mar 8 14:07:29.866: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:07:30.871: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:30.871: INFO: ss-0 kind-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:38 +0000 UTC }] Mar 8 14:07:30.871: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:30.871: INFO: ss-2 kind-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:20 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:30.871: INFO: Mar 8 14:07:30.871: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:07:31.874: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:31.874: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:31.874: INFO: Mar 8 14:07:31.874: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:32.879: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:32.879: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:32.879: INFO: Mar 8 14:07:32.879: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:33.883: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:33.883: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:33.883: INFO: Mar 8 14:07:33.883: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:34.887: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:34.887: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:34.887: INFO: Mar 8 14:07:34.887: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:35.891: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:35.891: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:35.891: INFO: Mar 8 14:07:35.891: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:36.896: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:36.896: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:36.896: INFO: Mar 8 14:07:36.896: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:37.900: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:07:37.900: INFO: ss-1 kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:07:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:06:58 +0000 UTC }] Mar 8 14:07:37.901: INFO: Mar 8 14:07:37.901: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 8 14:07:38.904: INFO: Verifying statefulset ss doesn't scale past 0 for another 957.579058ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9855 Mar 8 14:07:39.909: INFO: Scaling statefulset ss to 0 Mar 8 14:07:39.919: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 8 14:07:39.927: INFO: Deleting all statefulset in ns statefulset-9855 Mar 8 14:07:39.930: INFO: Scaling statefulset ss to 0 Mar 8 14:07:39.953: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 14:07:39.955: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:07:39.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"statefulset-9855" for this suite. • [SLOW TEST:61.909 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":208,"skipped":3500,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:07:39.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 8 14:07:40.039: INFO: namespace kubectl-8030 Mar 8 14:07:40.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8030' Mar 8 14:07:40.420: INFO: stderr: "" Mar 8 14:07:40.420: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost 
master to start. Mar 8 14:07:41.424: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:07:41.424: INFO: Found 0 / 1 Mar 8 14:07:42.424: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:07:42.424: INFO: Found 1 / 1 Mar 8 14:07:42.424: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 14:07:42.427: INFO: Selector matched 1 pods for map[app:agnhost] Mar 8 14:07:42.427: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 14:07:42.427: INFO: wait on agnhost-master startup in kubectl-8030 Mar 8 14:07:42.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-56vcf agnhost-master --namespace=kubectl-8030' Mar 8 14:07:42.579: INFO: stderr: "" Mar 8 14:07:42.579: INFO: stdout: "Paused\n" STEP: exposing RC Mar 8 14:07:42.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8030' Mar 8 14:07:42.727: INFO: stderr: "" Mar 8 14:07:42.727: INFO: stdout: "service/rm2 exposed\n" Mar 8 14:07:42.752: INFO: Service rm2 in namespace kubectl-8030 found. STEP: exposing service Mar 8 14:07:44.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8030' Mar 8 14:07:44.906: INFO: stderr: "" Mar 8 14:07:44.906: INFO: stdout: "service/rm3 exposed\n" Mar 8 14:07:44.909: INFO: Service rm3 in namespace kubectl-8030 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:07:46.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8030" for this suite. 
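[Editor's note] The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` call in the test above is roughly equivalent to applying a Service manifest by hand. A minimal sketch, assuming the Service inherits the RC's pod selector (`app: agnhost`, which matches the selector reported in the log); the port numbers come directly from the flags:

```yaml
# Hypothetical hand-written equivalent of the `kubectl expose rc` call above
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-8030
spec:
  selector:
    app: agnhost        # copied from the RC's pod template labels
  ports:
  - port: 1234          # from --port
    targetPort: 6379    # from --target-port
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, produces a second Service (`rm3`) with the same selector and a different front-end port, which is why both `rm2` and `rm3` end up routing to the same pods.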
• [SLOW TEST:6.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":209,"skipped":3504,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:07:46.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:07:58.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2223" for this suite. 
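[Editor's note] The ResourceQuota test above follows a create/observe/delete cycle: create a quota, create a ReplicaSet, confirm the quota's `used` count rises, delete the ReplicaSet, and confirm usage is released. The log does not show the quota's actual spec, so the name and limit below are illustrative; object-count quotas for replica sets use the `count/replicasets.apps` resource name:

```yaml
# Illustrative quota; the test's real object name and limit are not shown in the log
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: resourcequota-2223
spec:
  hard:
    count/replicasets.apps: "2"   # illustrative limit on ReplicaSet objects
```

With such a quota in place, `kubectl describe quota test-quota` reports the `hard` limit alongside the live `used` count, which is the status the test polls.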
• [SLOW TEST:11.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":210,"skipped":3509,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:07:58.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 8 14:07:58.117: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 8 14:07:58.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6145' Mar 8 14:07:58.394: INFO: stderr: "" Mar 8 14:07:58.394: INFO: stdout: "service/agnhost-slave created\n" Mar 8 14:07:58.394: INFO: apiVersion: v1 kind: Service 
metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 8 14:07:58.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6145' Mar 8 14:07:58.645: INFO: stderr: "" Mar 8 14:07:58.645: INFO: stdout: "service/agnhost-master created\n" Mar 8 14:07:58.645: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 8 14:07:58.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6145' Mar 8 14:07:58.920: INFO: stderr: "" Mar 8 14:07:58.920: INFO: stdout: "service/frontend created\n" Mar 8 14:07:58.921: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 8 14:07:58.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6145' Mar 8 14:07:59.158: INFO: stderr: "" Mar 8 14:07:59.158: INFO: stdout: "deployment.apps/frontend created\n" Mar 8 14:07:59.158: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", 
"--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 8 14:07:59.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6145' Mar 8 14:07:59.422: INFO: stderr: "" Mar 8 14:07:59.422: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 8 14:07:59.422: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 8 14:07:59.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6145' Mar 8 14:07:59.654: INFO: stderr: "" Mar 8 14:07:59.654: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 8 14:07:59.654: INFO: Waiting for all frontend pods to be Running. Mar 8 14:08:04.705: INFO: Waiting for frontend to serve content. Mar 8 14:08:04.715: INFO: Trying to add a new entry to the guestbook. Mar 8 14:08:04.728: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 8 14:08:04.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6145' Mar 8 14:08:04.909: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:04.909: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 8 14:08:04.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6145' Mar 8 14:08:05.029: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:05.029: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 14:08:05.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6145' Mar 8 14:08:05.125: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:05.125: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 14:08:05.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6145' Mar 8 14:08:05.196: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:05.196: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 8 14:08:05.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6145' Mar 8 14:08:05.276: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:05.276: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 8 14:08:05.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6145' Mar 8 14:08:05.349: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:05.349: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6145" for this suite. • [SLOW TEST:7.318 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":211,"skipped":3514,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:05.372: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 8 14:08:05.479: INFO: Waiting up to 5m0s for pod "pod-382337d4-235a-4d15-84d8-1e75418a70db" in namespace "emptydir-5202" to be "success or failure" Mar 8 14:08:05.484: INFO: Pod "pod-382337d4-235a-4d15-84d8-1e75418a70db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.917545ms Mar 8 14:08:07.488: INFO: Pod "pod-382337d4-235a-4d15-84d8-1e75418a70db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008440858s Mar 8 14:08:09.491: INFO: Pod "pod-382337d4-235a-4d15-84d8-1e75418a70db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011825776s STEP: Saw pod success Mar 8 14:08:09.491: INFO: Pod "pod-382337d4-235a-4d15-84d8-1e75418a70db" satisfied condition "success or failure" Mar 8 14:08:09.494: INFO: Trying to get logs from node kind-worker pod pod-382337d4-235a-4d15-84d8-1e75418a70db container test-container: STEP: delete the pod Mar 8 14:08:09.545: INFO: Waiting for pod pod-382337d4-235a-4d15-84d8-1e75418a70db to disappear Mar 8 14:08:09.551: INFO: Pod pod-382337d4-235a-4d15-84d8-1e75418a70db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:09.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5202" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3520,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:09.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:08:09.612: INFO: Creating ReplicaSet my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8 Mar 8 14:08:09.637: INFO: Pod name my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8: Found 0 pods out of 1 Mar 8 14:08:14.641: INFO: Pod name my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8: Found 1 pods out of 1 Mar 8 14:08:14.641: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8" is running Mar 8 14:08:14.643: INFO: Pod "my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8-vz9bz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:08:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:08:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:08:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:08:09 +0000 UTC Reason: Message:}]) Mar 8 14:08:14.643: INFO: Trying to dial the pod Mar 8 14:08:19.654: INFO: Controller my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8: Got expected result from replica 1 [my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8-vz9bz]: "my-hostname-basic-8f3f0cd9-37d9-4d4c-9114-d62ac4fe41b8-vz9bz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:19.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-939" for this suite. • [SLOW TEST:10.102 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":213,"skipped":3534,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:19.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to 
change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6852 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6852 STEP: creating replication controller externalsvc in namespace services-6852 I0308 14:08:19.771207 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6852, replica count: 2 I0308 14:08:22.821621 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 8 14:08:22.858: INFO: Creating new exec pod Mar 8 14:08:24.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6852 execpod2f7gn -- /bin/sh -x -c nslookup clusterip-service' Mar 8 14:08:26.697: INFO: stderr: "I0308 14:08:26.604496 3065 log.go:172] (0xc000768000) (0xc00078c000) Create stream\nI0308 14:08:26.604539 3065 log.go:172] (0xc000768000) (0xc00078c000) Stream added, broadcasting: 1\nI0308 14:08:26.607213 3065 log.go:172] (0xc000768000) Reply frame received for 1\nI0308 14:08:26.607262 3065 log.go:172] (0xc000768000) (0xc00081c000) Create stream\nI0308 14:08:26.607282 3065 log.go:172] (0xc000768000) (0xc00081c000) Stream added, broadcasting: 3\nI0308 14:08:26.608202 3065 log.go:172] (0xc000768000) Reply frame received for 3\nI0308 14:08:26.608236 3065 log.go:172] (0xc000768000) (0xc00078c0a0) Create stream\nI0308 14:08:26.608248 3065 log.go:172] (0xc000768000) (0xc00078c0a0) Stream added, broadcasting: 5\nI0308 14:08:26.609177 3065 log.go:172] (0xc000768000) Reply frame received for 5\nI0308 14:08:26.681265 3065 log.go:172] (0xc000768000) Data frame 
received for 5\nI0308 14:08:26.681292 3065 log.go:172] (0xc00078c0a0) (5) Data frame handling\nI0308 14:08:26.681311 3065 log.go:172] (0xc00078c0a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0308 14:08:26.690458 3065 log.go:172] (0xc000768000) Data frame received for 3\nI0308 14:08:26.690478 3065 log.go:172] (0xc00081c000) (3) Data frame handling\nI0308 14:08:26.690493 3065 log.go:172] (0xc00081c000) (3) Data frame sent\nI0308 14:08:26.691964 3065 log.go:172] (0xc000768000) Data frame received for 3\nI0308 14:08:26.691992 3065 log.go:172] (0xc00081c000) (3) Data frame handling\nI0308 14:08:26.692012 3065 log.go:172] (0xc00081c000) (3) Data frame sent\nI0308 14:08:26.692718 3065 log.go:172] (0xc000768000) Data frame received for 3\nI0308 14:08:26.692733 3065 log.go:172] (0xc00081c000) (3) Data frame handling\nI0308 14:08:26.692761 3065 log.go:172] (0xc000768000) Data frame received for 5\nI0308 14:08:26.692783 3065 log.go:172] (0xc00078c0a0) (5) Data frame handling\nI0308 14:08:26.694471 3065 log.go:172] (0xc000768000) Data frame received for 1\nI0308 14:08:26.694499 3065 log.go:172] (0xc00078c000) (1) Data frame handling\nI0308 14:08:26.694510 3065 log.go:172] (0xc00078c000) (1) Data frame sent\nI0308 14:08:26.694525 3065 log.go:172] (0xc000768000) (0xc00078c000) Stream removed, broadcasting: 1\nI0308 14:08:26.694551 3065 log.go:172] (0xc000768000) Go away received\nI0308 14:08:26.694910 3065 log.go:172] (0xc000768000) (0xc00078c000) Stream removed, broadcasting: 1\nI0308 14:08:26.694929 3065 log.go:172] (0xc000768000) (0xc00081c000) Stream removed, broadcasting: 3\nI0308 14:08:26.694938 3065 log.go:172] (0xc000768000) (0xc00078c0a0) Stream removed, broadcasting: 5\n" Mar 8 14:08:26.697: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6852.svc.cluster.local\tcanonical name = externalsvc.services-6852.svc.cluster.local.\nName:\texternalsvc.services-6852.svc.cluster.local\nAddress: 10.96.174.116\n\n" STEP: 
deleting ReplicationController externalsvc in namespace services-6852, will wait for the garbage collector to delete the pods Mar 8 14:08:26.756: INFO: Deleting ReplicationController externalsvc took: 5.264465ms Mar 8 14:08:26.857: INFO: Terminating ReplicationController externalsvc pods took: 100.22257ms Mar 8 14:08:39.473: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:39.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6852" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.832 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":214,"skipped":3540,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:39.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the 
correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 8 14:08:39.571: INFO: Waiting up to 5m0s for pod "pod-d0de001b-0fa8-4eba-8321-50317548511b" in namespace "emptydir-4022" to be "success or failure" Mar 8 14:08:39.579: INFO: Pod "pod-d0de001b-0fa8-4eba-8321-50317548511b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548677ms Mar 8 14:08:41.583: INFO: Pod "pod-d0de001b-0fa8-4eba-8321-50317548511b": Phase="Running", Reason="", readiness=true. Elapsed: 2.012597612s Mar 8 14:08:43.588: INFO: Pod "pod-d0de001b-0fa8-4eba-8321-50317548511b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017283039s STEP: Saw pod success Mar 8 14:08:43.588: INFO: Pod "pod-d0de001b-0fa8-4eba-8321-50317548511b" satisfied condition "success or failure" Mar 8 14:08:43.591: INFO: Trying to get logs from node kind-worker2 pod pod-d0de001b-0fa8-4eba-8321-50317548511b container test-container: STEP: delete the pod Mar 8 14:08:43.643: INFO: Waiting for pod pod-d0de001b-0fa8-4eba-8321-50317548511b to disappear Mar 8 14:08:43.651: INFO: Pod pod-d0de001b-0fa8-4eba-8321-50317548511b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:43.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4022" for this suite. 
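The emptyDir test that just completed creates a pod mounting a volume on the default medium (node disk, since `medium` is omitted) and asserts the expected file mode on the mount. A rough stand-alone equivalent — names and the probe command are illustrative, not the suite's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the permission bits of the mount point, mirroring what the
    # e2e mounttest image reports back to the framework.
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}                  # medium omitted => default (node filesystem)
```

The pod runs to completion and the framework reads its logs, which is why the log above shows the Pending → Running → Succeeded phase transitions followed by "Trying to get logs".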
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3547,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:43.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Mar 8 14:08:43.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-442' Mar 8 14:08:43.991: INFO: stderr: "" Mar 8 14:08:43.991: INFO: stdout: "pod/pause created\n" Mar 8 14:08:43.991: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 8 14:08:43.992: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-442" to be "running and ready" Mar 8 14:08:44.018: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 25.989173ms Mar 8 14:08:46.021: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.029481486s Mar 8 14:08:46.021: INFO: Pod "pause" satisfied condition "running and ready" Mar 8 14:08:46.021: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 8 14:08:46.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-442' Mar 8 14:08:46.165: INFO: stderr: "" Mar 8 14:08:46.165: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 8 14:08:46.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-442' Mar 8 14:08:46.272: INFO: stderr: "" Mar 8 14:08:46.272: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 8 14:08:46.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-442' Mar 8 14:08:46.356: INFO: stderr: "" Mar 8 14:08:46.356: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 8 14:08:46.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-442' Mar 8 14:08:46.433: INFO: stderr: "" Mar 8 14:08:46.434: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Mar 8 14:08:46.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-442' Mar 8 14:08:46.555: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 14:08:46.555: INFO: stdout: "pod \"pause\" force deleted\n" Mar 8 14:08:46.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-442' Mar 8 14:08:46.663: INFO: stderr: "No resources found in kubectl-442 namespace.\n" Mar 8 14:08:46.663: INFO: stdout: "" Mar 8 14:08:46.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-442 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 14:08:46.740: INFO: stderr: "" Mar 8 14:08:46.740: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:46.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-442" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":216,"skipped":3559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:46.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:08:59.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1346" for this suite. • [SLOW TEST:13.152 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":217,"skipped":3602,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:08:59.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 8 14:09:00.000: INFO: Waiting up to 5m0s for pod "var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547" in namespace "var-expansion-8288" to be "success or failure" Mar 8 14:09:00.005: INFO: Pod "var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547": Phase="Pending", Reason="", readiness=false. Elapsed: 5.211507ms Mar 8 14:09:02.009: INFO: Pod "var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008975614s STEP: Saw pod success Mar 8 14:09:02.009: INFO: Pod "var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547" satisfied condition "success or failure" Mar 8 14:09:02.011: INFO: Trying to get logs from node kind-worker pod var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547 container dapi-container: STEP: delete the pod Mar 8 14:09:02.079: INFO: Waiting for pod var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547 to disappear Mar 8 14:09:02.083: INFO: Pod var-expansion-2fba1010-d39a-41cc-a904-1fc44accc547 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:02.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8288" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:02.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-62bc10c6-8251-4b82-8c41-9822dc8ce68b STEP: Creating a pod to test consume secrets Mar 8 14:09:02.168: INFO: Waiting up to 5m0s for pod 
"pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e" in namespace "secrets-8155" to be "success or failure" Mar 8 14:09:02.209: INFO: Pod "pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e": Phase="Pending", Reason="", readiness=false. Elapsed: 40.882832ms Mar 8 14:09:04.213: INFO: Pod "pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044824322s STEP: Saw pod success Mar 8 14:09:04.213: INFO: Pod "pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e" satisfied condition "success or failure" Mar 8 14:09:04.216: INFO: Trying to get logs from node kind-worker pod pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e container secret-volume-test: STEP: delete the pod Mar 8 14:09:04.236: INFO: Waiting for pod pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e to disappear Mar 8 14:09:04.239: INFO: Pod pod-secrets-04e735c3-68bc-4c9a-90ef-fe5c4845037e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:04.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8155" for this suite. 
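The Secrets volume test above mounts a generated Secret into a pod and verifies its content from inside the container. A minimal sketch of the same pattern, with illustrative names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret               # illustrative; the suite generates a random name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo        # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Each key in the Secret appears as a file under the mount path.
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
```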
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3657,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:04.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 8 14:09:06.348: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:06.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3080" for this suite. 
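The container-runtime test that follows in the log exercises `terminationMessagePolicy: FallbackToLogsOnError`: with that policy, container logs are copied into the termination message only when the container exits non-zero, so a successful run leaves the message empty — exactly the `Expected: &{} to match Container's Termination Message` assertion above. A sketch of such a pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    # On success the termination message stays empty; only a non-zero
    # exit would cause the tail of the logs to be used as the message.
    terminationMessagePolicy: FallbackToLogsOnError
```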
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3667,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:06.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 14:09:07.191: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 14:09:10.240: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap 
should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:10.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3766" for this suite. STEP: Destroying namespace "webhook-3766-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":221,"skipped":3684,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:10.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-310ba9f2-7bfc-4f1d-a36d-7f6f88834194 STEP: Creating a pod to test consume configMaps Mar 8 14:09:10.438: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1" in namespace "configmap-9022" to be "success or failure" Mar 8 14:09:10.457: INFO: Pod 
"pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.073258ms Mar 8 14:09:12.460: INFO: Pod "pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022711629s STEP: Saw pod success Mar 8 14:09:12.460: INFO: Pod "pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1" satisfied condition "success or failure" Mar 8 14:09:12.463: INFO: Trying to get logs from node kind-worker pod pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1 container configmap-volume-test: STEP: delete the pod Mar 8 14:09:12.492: INFO: Waiting for pod pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1 to disappear Mar 8 14:09:12.496: INFO: Pod pod-configmaps-ebd06923-e374-4c02-9dc2-b6d59a5c9da1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:12.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9022" for this suite. 
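The ConfigMap test above consumes a ConfigMap as a volume while running as a non-root user. A minimal equivalent — the UID and all names are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config               # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo    # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000               # arbitrary non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
```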
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3686,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:12.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 8 14:09:12.547: INFO: >>> kubeConfig: /root/.kube/config Mar 8 14:09:16.015: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:28.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4227" for this suite. 
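The crd-publish-openapi test that just ran registers two CRDs sharing a group and version but with different kinds, then checks both appear in the served OpenAPI document. A sketch of one such CRD under an assumed group (`demo.example.com` is illustrative; the suite pairs it with a second CRD whose `kind` differs, e.g. `Bar` alongside `Foo`):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.demo.example.com     # must be <plural>.<group>
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object              # v1 CRDs require a structural schema
```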
• [SLOW TEST:16.271 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":223,"skipped":3698,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:28.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 14:09:28.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d" in namespace "projected-2599" to be "success or failure" Mar 8 14:09:28.863: INFO: Pod "downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.689546ms Mar 8 14:09:30.867: INFO: Pod "downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039322439s STEP: Saw pod success Mar 8 14:09:30.867: INFO: Pod "downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d" satisfied condition "success or failure" Mar 8 14:09:30.869: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d container client-container: STEP: delete the pod Mar 8 14:09:30.896: INFO: Waiting for pod downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d to disappear Mar 8 14:09:30.900: INFO: Pod downwardapi-volume-ca85b006-8393-4157-a364-16514f96751d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:30.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2599" for this suite. 
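The projected downwardAPI test above sets `defaultMode` on a projected volume and verifies the resulting file permissions from inside the container. A sketch under illustrative names (0400 is an example mode, not necessarily the one the suite picks):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400           # applied to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```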
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3701,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:30.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 8 14:09:30.973: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:09:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2088" for this suite. 
• [SLOW TEST:19.058 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":225,"skipped":3708,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:09:49.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-d476883e-bf0b-4f5c-9d5d-375675b1196a in namespace container-probe-1900 Mar 8 14:09:52.022: INFO: Started pod busybox-d476883e-bf0b-4f5c-9d5d-375675b1196a in namespace container-probe-1900 STEP: checking the pod's current state and verifying that restartCount is present Mar 8 14:09:52.025: INFO: Initial restart count of pod 
busybox-d476883e-bf0b-4f5c-9d5d-375675b1196a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:13:52.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1900" for this suite. • [SLOW TEST:242.615 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3729,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:13:52.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:13:52.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1114" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":227,"skipped":3737,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:13:52.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:13:52.709: INFO: Creating deployment "webserver-deployment" Mar 8 14:13:52.716: INFO: Waiting for observed generation 1 Mar 8 14:13:54.788: INFO: Waiting for all required pods to come up Mar 8 14:13:54.793: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 8 14:13:56.803: INFO: Waiting for deployment "webserver-deployment" to complete Mar 8 14:13:56.809: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 8 
14:13:56.815: INFO: Updating deployment webserver-deployment Mar 8 14:13:56.815: INFO: Waiting for observed generation 2 Mar 8 14:13:58.822: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 8 14:13:58.825: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 8 14:13:58.828: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 14:13:58.836: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 8 14:13:58.836: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 8 14:13:58.839: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 8 14:13:58.843: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 8 14:13:58.843: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 8 14:13:58.849: INFO: Updating deployment webserver-deployment Mar 8 14:13:58.849: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 8 14:13:58.860: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 8 14:13:58.879: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 14:13:58.975: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7363 /apis/apps/v1/namespaces/deployment-7363/deployments/webserver-deployment b421fe39-12e4-41ed-8137-92934ba16692 28081 3 2020-03-08 14:13:52 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a08c48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-08 14:13:57 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-08 14:13:58 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 8 14:13:59.006: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7363 /apis/apps/v1/namespaces/deployment-7363/replicasets/webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 28129 3 2020-03-08 14:13:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] 
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b421fe39-12e4-41ed-8137-92934ba16692 0xc002a61dd7 0xc002a61dd8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a61e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 8 14:13:59.006: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 8 14:13:59.007: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7363 /apis/apps/v1/namespaces/deployment-7363/replicasets/webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 28121 3 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b421fe39-12e4-41ed-8137-92934ba16692 0xc002a61d17 0xc002a61d18}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a61d78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 8 14:13:59.071: INFO: Pod "webserver-deployment-595b5b9587-4jl7m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4jl7m webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-4jl7m ec75092e-c695-4e0d-953e-f8a07a4de35a 28095 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389397 0xc002389398}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.072: INFO: Pod "webserver-deployment-595b5b9587-4zm8j" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4zm8j webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-4zm8j 4d77e6a6-03ca-4b8e-b71a-a5b6670ac06b 28120 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389620 0xc002389621}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdom
ain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-08 14:13:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.072: INFO: Pod "webserver-deployment-595b5b9587-5jzzq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5jzzq webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-5jzzq a6033ebb-b254-4c1f-98c5-4e042b116d0e 27955 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389820 0xc002389821}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdom
ain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.187,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://78d0977e39356b74eafddd3a91da4eb9de55b78c2698a3975b54c6596afa6986,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.072: INFO: Pod "webserver-deployment-595b5b9587-5vl8s" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5vl8s webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-5vl8s 25272bc5-6744-4697-ad1d-c03e0d4ce6c2 28105 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389a60 0xc002389a61}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdom
ain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.073: INFO: Pod "webserver-deployment-595b5b9587-7j5pz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7j5pz webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-7j5pz adf151ca-759c-468b-b5c0-3d78d18b9df6 27957 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389c70 0xc002389c71}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.174,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://21947ca6188e46d9c249d1df9bd35bdb66aa9d07dd08adfaf7728419722d779f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.073: INFO: Pod "webserver-deployment-595b5b9587-bj6rk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bj6rk webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-bj6rk 1dd2bd20-5526-4b8e-897d-a6c7a4487795 28086 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389e70 0xc002389e71}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.073: INFO: Pod "webserver-deployment-595b5b9587-g2kxq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g2kxq webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-g2kxq 7da5531a-29f5-45bf-b77d-237eea64cf78 28118 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc002389f80 0xc002389f81}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.074: INFO: Pod "webserver-deployment-595b5b9587-g7krb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g7krb webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-g7krb aeeda1bf-c999-48cb-9221-286e8e8f0b57 27971 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b60d0 0xc0018b60d1}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.188,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6bbb2f58781e98244bbf85752a1dfaa55651dc37b0dc951884c15dbbc77de595,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.074: INFO: Pod "webserver-deployment-595b5b9587-gfsl7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gfsl7 webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-gfsl7 5ddf4deb-03f5-43fa-a9f5-2b47ececa0f6 28113 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b6240 0xc0018b6241}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.074: INFO: Pod "webserver-deployment-595b5b9587-hdnpt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hdnpt webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-hdnpt 95328340-7b69-4a88-ab0c-625ac1f4d25f 27949 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b6360 0xc0018b6361}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.186,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f9d6e1bd41dddde29dea63b72f692b856c88b660d99a37daf39967da79516f42,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.074: INFO: Pod "webserver-deployment-595b5b9587-hwp97" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hwp97 webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-hwp97 c4efd563-86d6-4c81-b3c0-816c3f2a3f2e 28112 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b6620 0xc0018b6621}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.075: INFO: Pod "webserver-deployment-595b5b9587-kvnsh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kvnsh webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-kvnsh b48e60e9-655b-4dfa-a830-4e5091b6e505 27959 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b67c0 0xc0018b67c1}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.177,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://63bf7ae844cacf774533d7edaf4f449dd2559988856c1424e3d3f949bfd293a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.075: INFO: Pod "webserver-deployment-595b5b9587-l8q7s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l8q7s webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-l8q7s 5d2bc8ba-8185-4f8d-9766-b135da4b5c39 27952 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b6c50 0xc0018b6c51}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.178,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f32b011884a7af1fb021a22a7d4aeee6d8f49a1a6e8ce6c5c19b74566b5512ad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.076: INFO: Pod "webserver-deployment-595b5b9587-lmr4g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lmr4g webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-lmr4g b9caf777-ec63-42e4-b599-5060e58ba9d0 28116 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b6e50 0xc0018b6e51}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.076: INFO: Pod "webserver-deployment-595b5b9587-m9qr9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m9qr9 webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-m9qr9 2648df59-a2cb-4dfe-9f3d-60e18004b283 27947 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b7040 0xc0018b7041}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.175,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://32e3b31f69a9d483960a72d26f3658ab1505c371bf4fbefc09cf9749ea1d0359,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.076: INFO: Pod "webserver-deployment-595b5b9587-n784h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n784h webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-n784h 47bca17e-675a-4aee-b846-6aca9d61b7f0 28098 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b71b0 0xc0018b71b1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.077: INFO: Pod "webserver-deployment-595b5b9587-v8kff" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v8kff webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-v8kff 1d2776ab-6cb5-4611-a207-6eefd6a0c71e 28106 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b72c0 0xc0018b72c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.077: INFO: Pod "webserver-deployment-595b5b9587-vdl62" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vdl62 webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-vdl62 9b17db02-0551-49a6-a8a7-cdc525c9a3e7 28084 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b73e0 0xc0018b73e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdom
ain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.077: INFO: Pod "webserver-deployment-595b5b9587-vs75d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vs75d webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-vs75d c7c9ed4c-a808-49c2-ae93-2d3d7b3b3b3d 28117 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b74f0 0xc0018b74f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdom
ain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.077: INFO: Pod "webserver-deployment-595b5b9587-z5cbg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5cbg webserver-deployment-595b5b9587- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-595b5b9587-z5cbg df71c534-21b7-454b-bc62-f92ced15e3d4 27963 0 2020-03-08 14:13:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 02f19ea6-aa49-4edf-929b-102b6cd72d46 0xc0018b7600 0xc0018b7601}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.176,StartTime:2020-03-08 14:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:13:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca81b62e07d4f7b400384a1ca2e2cf1534ca2f9b6b02a19bf1bc9f438126cdc6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.078: INFO: Pod "webserver-deployment-c7997dcc8-2mzzn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2mzzn webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-2mzzn 8d03576d-2a2b-4e7e-8ec4-d33076f75595 28022 0 2020-03-08 14:13:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b7770 0xc0018b7771}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-08 14:13:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.078: INFO: Pod "webserver-deployment-c7997dcc8-4m5xj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4m5xj webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-4m5xj da698675-6165-4110-9b16-1bf933aa8c73 28091 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b78e0 0xc0018b78e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.078: INFO: Pod "webserver-deployment-c7997dcc8-4t9nx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4t9nx webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-4t9nx aaaba0cd-b409-4d53-bfc1-2b577f8e18a1 28110 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b7a10 0xc0018b7a11}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.079: INFO: Pod "webserver-deployment-c7997dcc8-97hg4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-97hg4 webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-97hg4 1fd978e4-f530-4a90-b433-0d18ed6ad701 28107 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b7b40 0xc0018b7b41}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.079: INFO: Pod "webserver-deployment-c7997dcc8-9wrxd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9wrxd webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-9wrxd e7be9475-9697-4425-a94f-734d4d83f972 28013 0 2020-03-08 14:13:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b7c70 0xc0018b7c71}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-08 14:13:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.079: INFO: Pod "webserver-deployment-c7997dcc8-mvxb8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mvxb8 webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-mvxb8 1e22d9e4-c826-4833-aa0c-513aa3afde4f 28101 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b7e20 0xc0018b7e21}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.080: INFO: Pod "webserver-deployment-c7997dcc8-p228n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p228n webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-p228n e79cbc2f-d508-47d4-84a5-0561058dd84a 28111 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0018b7f40 0xc0018b7f41}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.080: INFO: Pod "webserver-deployment-c7997dcc8-pc9mh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pc9mh webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-pc9mh 67804ad4-0b79-4344-9952-b0794b96b86d 28108 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0027d0060 0xc0027d0061}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.080: INFO: Pod "webserver-deployment-c7997dcc8-pq7vn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pq7vn webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-pq7vn 36ce6673-6a1b-4c22-af09-5944c5cec760 28123 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0027d0180 0xc0027d0181}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.080: INFO: Pod "webserver-deployment-c7997dcc8-rl6jx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rl6jx webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-rl6jx a7f030db-f472-4aed-99f5-d0dad7aef6fd 28032 0 2020-03-08 14:13:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0027d02a0 0xc0027d02a1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedu
lerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-03-08 14:13:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.081: INFO: Pod "webserver-deployment-c7997dcc8-tbsfk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tbsfk webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-tbsfk 88689876-0950-47e0-ba0a-6232b2b9d41d 28030 0 2020-03-08 14:13:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0027d0410 0xc0027d0411}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-08 14:13:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.081: INFO: Pod "webserver-deployment-c7997dcc8-tvwbx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tvwbx webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-tvwbx a7b02f9a-83fc-44e9-82a5-f64e46c573a9 28006 0 2020-03-08 14:13:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0027d0580 0xc0027d0581}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-08 14:13:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 8 14:13:59.081: INFO: Pod "webserver-deployment-c7997dcc8-x7jjd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x7jjd webserver-deployment-c7997dcc8- deployment-7363 /api/v1/namespaces/deployment-7363/pods/webserver-deployment-c7997dcc8-x7jjd 17d0df83-7367-4bd7-882c-a0c950131940 28127 0 2020-03-08 14:13:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6956c7f-ea49-4e18-abe5-15cc8281b604 0xc0027d06f0 0xc0027d06f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t7vd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t7vd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t7vd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:13:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-03-08 14:13:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:13:59.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7363" for this suite. • [SLOW TEST:6.544 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":228,"skipped":3747,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:13:59.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be 
able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9978 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9978 STEP: creating replication controller externalsvc in namespace services-9978 I0308 14:13:59.580844 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9978, replica count: 2 I0308 14:14:02.631234 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0308 14:14:05.631357 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 8 14:14:05.818: INFO: Creating new exec pod Mar 8 14:14:11.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9978 execpodhmq7b -- /bin/sh -x -c nslookup nodeport-service' Mar 8 14:14:12.137: INFO: stderr: "I0308 14:14:12.062804 3255 log.go:172] (0xc0000f51e0) (0xc0005dbd60) Create stream\nI0308 14:14:12.062851 3255 log.go:172] (0xc0000f51e0) (0xc0005dbd60) Stream added, broadcasting: 1\nI0308 14:14:12.064602 3255 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0308 14:14:12.064627 3255 log.go:172] (0xc0000f51e0) (0xc000b32000) Create stream\nI0308 14:14:12.064638 3255 log.go:172] (0xc0000f51e0) (0xc000b32000) Stream added, broadcasting: 3\nI0308 14:14:12.065364 3255 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0308 14:14:12.065382 3255 log.go:172] (0xc0000f51e0) (0xc0005dbe00) Create stream\nI0308 14:14:12.065388 3255 log.go:172] (0xc0000f51e0) (0xc0005dbe00) Stream 
added, broadcasting: 5\nI0308 14:14:12.066076 3255 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0308 14:14:12.121785 3255 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0308 14:14:12.121808 3255 log.go:172] (0xc0005dbe00) (5) Data frame handling\nI0308 14:14:12.121825 3255 log.go:172] (0xc0005dbe00) (5) Data frame sent\n+ nslookup nodeport-service\nI0308 14:14:12.130545 3255 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0308 14:14:12.130560 3255 log.go:172] (0xc000b32000) (3) Data frame handling\nI0308 14:14:12.130568 3255 log.go:172] (0xc000b32000) (3) Data frame sent\nI0308 14:14:12.131721 3255 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0308 14:14:12.131783 3255 log.go:172] (0xc000b32000) (3) Data frame handling\nI0308 14:14:12.131819 3255 log.go:172] (0xc000b32000) (3) Data frame sent\nI0308 14:14:12.132039 3255 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0308 14:14:12.132066 3255 log.go:172] (0xc000b32000) (3) Data frame handling\nI0308 14:14:12.132245 3255 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0308 14:14:12.132262 3255 log.go:172] (0xc0005dbe00) (5) Data frame handling\nI0308 14:14:12.133877 3255 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0308 14:14:12.133899 3255 log.go:172] (0xc0005dbd60) (1) Data frame handling\nI0308 14:14:12.133910 3255 log.go:172] (0xc0005dbd60) (1) Data frame sent\nI0308 14:14:12.133925 3255 log.go:172] (0xc0000f51e0) (0xc0005dbd60) Stream removed, broadcasting: 1\nI0308 14:14:12.133949 3255 log.go:172] (0xc0000f51e0) Go away received\nI0308 14:14:12.134290 3255 log.go:172] (0xc0000f51e0) (0xc0005dbd60) Stream removed, broadcasting: 1\nI0308 14:14:12.134314 3255 log.go:172] (0xc0000f51e0) (0xc000b32000) Stream removed, broadcasting: 3\nI0308 14:14:12.134324 3255 log.go:172] (0xc0000f51e0) (0xc0005dbe00) Stream removed, broadcasting: 5\n" Mar 8 14:14:12.137: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9978.svc.cluster.local\tcanonical name = externalsvc.services-9978.svc.cluster.local.\nName:\texternalsvc.services-9978.svc.cluster.local\nAddress: 10.96.252.105\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9978, will wait for the garbage collector to delete the pods Mar 8 14:14:12.195: INFO: Deleting ReplicationController externalsvc took: 5.385999ms Mar 8 14:14:12.296: INFO: Terminating ReplicationController externalsvc pods took: 100.291148ms Mar 8 14:14:19.510: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:14:19.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9978" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.331 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":229,"skipped":3750,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Mar 8 14:14:19.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-859bae6f-aecd-44c6-af3a-3b37732c3bdb STEP: Creating a pod to test consume secrets Mar 8 14:14:19.599: INFO: Waiting up to 5m0s for pod "pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc" in namespace "secrets-113" to be "success or failure" Mar 8 14:14:19.615: INFO: Pod "pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.006828ms Mar 8 14:14:21.620: INFO: Pod "pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020414883s STEP: Saw pod success Mar 8 14:14:21.620: INFO: Pod "pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc" satisfied condition "success or failure" Mar 8 14:14:21.623: INFO: Trying to get logs from node kind-worker pod pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc container secret-volume-test: STEP: delete the pod Mar 8 14:14:21.675: INFO: Waiting for pod pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc to disappear Mar 8 14:14:21.682: INFO: Pod pod-secrets-a9bd6190-ab63-413e-8801-13e1256309fc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:14:21.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-113" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3770,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:14:21.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-a8bc5669-e259-48ca-b39e-1576027fa3b4 STEP: Creating a pod to test consume configMaps Mar 8 14:14:21.768: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349" in namespace "projected-5348" to be "success or failure" Mar 8 14:14:21.772: INFO: Pod "pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349": Phase="Pending", Reason="", readiness=false. Elapsed: 3.950209ms Mar 8 14:14:23.776: INFO: Pod "pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007906881s STEP: Saw pod success Mar 8 14:14:23.776: INFO: Pod "pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349" satisfied condition "success or failure" Mar 8 14:14:23.778: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349 container projected-configmap-volume-test: STEP: delete the pod Mar 8 14:14:23.812: INFO: Waiting for pod pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349 to disappear Mar 8 14:14:23.818: INFO: Pod pod-projected-configmaps-355d9d3e-f0e1-4847-b444-9c7ba3206349 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:14:23.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5348" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3773,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:14:23.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-4b332c17-af5d-41f2-8498-b9406bba7a12 STEP: Creating configMap with name 
cm-test-opt-upd-b524d0c1-d24f-4b5d-b0fb-da43483a7f2e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4b332c17-af5d-41f2-8498-b9406bba7a12 STEP: Updating configmap cm-test-opt-upd-b524d0c1-d24f-4b5d-b0fb-da43483a7f2e STEP: Creating configMap with name cm-test-opt-create-c6e4e1cc-7c7b-4cec-abf6-631fee00728a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:34.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4724" for this suite. • [SLOW TEST:70.485 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3786,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:34.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 14:15:35.067: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 8 14:15:38.125: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 8 14:15:38.148: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:38.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1641" for this suite. STEP: Destroying namespace "webhook-1641-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":233,"skipped":3802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:38.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 14:15:38.324: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:41.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5700" for this suite. 
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":234,"skipped":3839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:41.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:44.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1599" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":235,"skipped":3870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:44.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 8 14:15:44.699: INFO: Waiting up to 5m0s for pod "var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8" in namespace "var-expansion-2390" to be "success or failure" Mar 8 14:15:44.719: INFO: Pod "var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.326902ms Mar 8 14:15:46.722: INFO: Pod "var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.022694887s STEP: Saw pod success Mar 8 14:15:46.722: INFO: Pod "var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8" satisfied condition "success or failure" Mar 8 14:15:46.724: INFO: Trying to get logs from node kind-worker pod var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8 container dapi-container: STEP: delete the pod Mar 8 14:15:46.751: INFO: Waiting for pod var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8 to disappear Mar 8 14:15:46.755: INFO: Pod var-expansion-98f71a20-8ed7-4026-96bd-47cc1bc86de8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:46.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2390" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3898,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:46.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:15:46.828: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc 
"condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 8 14:15:48.867: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:49.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7776" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":237,"skipped":3914,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:49.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 8 14:15:50.360: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Mar 8 14:15:53.399: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 8 14:15:55.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4121 to-be-attached-pod -i -c=container1' Mar 8 14:15:55.528: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:15:55.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4121" for this suite. STEP: Destroying namespace "webhook-4121-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.708 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":238,"skipped":3922,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:15:55.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2768, will wait for the garbage collector to delete the pods Mar 8 14:15:57.760: INFO: Deleting Job.batch foo took: 5.557286ms Mar 8 14:15:57.860: INFO: Terminating Job.batch foo pods took: 100.185552ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:16:31.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2768" for this suite. 
• [SLOW TEST:36.071 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":239,"skipped":3928,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:16:31.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-10ce5bfb-95d0-41fc-b1c1-cbf54a76895f STEP: Creating a pod to test consume secrets Mar 8 14:16:31.740: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8" in namespace "projected-5219" to be "success or failure" Mar 8 14:16:31.745: INFO: Pod "pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417001ms Mar 8 14:16:33.748: INFO: Pod "pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008392077s STEP: Saw pod success Mar 8 14:16:33.749: INFO: Pod "pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8" satisfied condition "success or failure" Mar 8 14:16:33.752: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8 container secret-volume-test: STEP: delete the pod Mar 8 14:16:33.782: INFO: Waiting for pod pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8 to disappear Mar 8 14:16:33.787: INFO: Pod pod-projected-secrets-f5396983-1e3d-4bc1-a48b-b5f7c473f7b8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:16:33.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5219" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3942,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:16:33.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 
14:16:39.873454 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 14:16:39.873: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:16:39.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4516" for this suite. 
• [SLOW TEST:6.084 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":241,"skipped":3944,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:16:39.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 14:16:39.957: INFO: Waiting up to 5m0s for pod "pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca" in namespace "emptydir-7995" to be "success or failure" Mar 8 14:16:39.960: INFO: Pod "pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.500406ms Mar 8 14:16:41.964: INFO: Pod "pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007121046s STEP: Saw pod success Mar 8 14:16:41.964: INFO: Pod "pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca" satisfied condition "success or failure" Mar 8 14:16:41.967: INFO: Trying to get logs from node kind-worker pod pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca container test-container: STEP: delete the pod Mar 8 14:16:41.991: INFO: Waiting for pod pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca to disappear Mar 8 14:16:41.996: INFO: Pod pod-b8bf0587-4cdb-429d-92cc-90ec7a7cf3ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:16:41.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7995" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3947,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:16:42.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-2908 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2908 to expose 
endpoints map[] Mar 8 14:16:42.098: INFO: Get endpoints failed (8.888333ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 8 14:16:43.102: INFO: successfully validated that service endpoint-test2 in namespace services-2908 exposes endpoints map[] (1.012082627s elapsed) STEP: Creating pod pod1 in namespace services-2908 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2908 to expose endpoints map[pod1:[80]] Mar 8 14:16:45.164: INFO: successfully validated that service endpoint-test2 in namespace services-2908 exposes endpoints map[pod1:[80]] (2.056248657s elapsed) STEP: Creating pod pod2 in namespace services-2908 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2908 to expose endpoints map[pod1:[80] pod2:[80]] Mar 8 14:16:47.232: INFO: successfully validated that service endpoint-test2 in namespace services-2908 exposes endpoints map[pod1:[80] pod2:[80]] (2.062626369s elapsed) STEP: Deleting pod pod1 in namespace services-2908 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2908 to expose endpoints map[pod2:[80]] Mar 8 14:16:47.251: INFO: successfully validated that service endpoint-test2 in namespace services-2908 exposes endpoints map[pod2:[80]] (14.60475ms elapsed) STEP: Deleting pod pod2 in namespace services-2908 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2908 to expose endpoints map[] Mar 8 14:16:47.288: INFO: successfully validated that service endpoint-test2 in namespace services-2908 exposes endpoints map[] (33.658072ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:16:47.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2908" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:5.325 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":243,"skipped":3965,"failed":0} [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:16:47.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:16:47.401: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 8 14:16:52.426: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 14:16:52.426: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 8 14:16:54.481: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment 
deployment-1024 /apis/apps/v1/namespaces/deployment-1024/deployments/test-cleanup-deployment 72fc42db-027b-47f1-89db-b355faaaad36 29770 1 2020-03-08 14:16:52 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004cfa9a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-08 14:16:52 +0000 UTC,LastTransitionTime:2020-03-08 14:16:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-08 14:16:53 +0000 UTC,LastTransitionTime:2020-03-08 14:16:52 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 8 14:16:54.484: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-1024 /apis/apps/v1/namespaces/deployment-1024/replicasets/test-cleanup-deployment-55ffc6b7b6 4412cebd-a7fa-4402-8f72-40248d2d8c32 29759 1 2020-03-08 14:16:52 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 72fc42db-027b-47f1-89db-b355faaaad36 0xc004d28337 0xc004d28338}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004d283a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 8 14:16:54.488: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-p84qf" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-p84qf test-cleanup-deployment-55ffc6b7b6- 
deployment-1024 /api/v1/namespaces/deployment-1024/pods/test-cleanup-deployment-55ffc6b7b6-p84qf ef9d555a-1900-4971-940a-d195dceae28f 29758 0 2020-03-08 14:16:52 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 4412cebd-a7fa-4402-8f72-40248d2d8c32 0xc002b2fcb7 0xc002b2fcb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w6sw9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w6sw9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w6sw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string
{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:16:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:16:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:16:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-08 14:16:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.204,StartTime:2020-03-08 14:16:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-08 14:16:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4e8602ffd9df032204476c79c47d192236cc08571dfd7951ed3ea48e3f408160,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:16:54.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1024" for this suite. • [SLOW TEST:7.169 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":244,"skipped":3965,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:16:54.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 8 14:16:54.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2241' Mar 8 14:16:54.669: INFO: stderr: "" Mar 8 14:16:54.669: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 8 14:16:59.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2241 -o json' Mar 8 14:16:59.802: INFO: stderr: "" Mar 8 14:16:59.802: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-08T14:16:54Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2241\",\n \"resourceVersion\": \"29792\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2241/pods/e2e-test-httpd-pod\",\n \"uid\": \"af630ac4-de93-4380-99a1-56042c4ad8a5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8xtcf\",\n \"readOnly\": true\n }\n ]\n }\n ],\n 
\"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kind-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8xtcf\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8xtcf\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T14:16:54Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T14:16:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T14:16:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T14:16:54Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://78ae21ffd39e5110da659e2d19fc66e928135300ceac5171249ed123e2c33f5e\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-08T14:16:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": 
\"10.244.2.223\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.223\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-08T14:16:54Z\"\n }\n}\n" STEP: replace the image in the pod Mar 8 14:16:59.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2241' Mar 8 14:17:00.104: INFO: stderr: "" Mar 8 14:17:00.104: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882 Mar 8 14:17:00.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2241' Mar 8 14:17:01.715: INFO: stderr: "" Mar 8 14:17:01.715: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:01.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2241" for this suite. 
• [SLOW TEST:7.268 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":245,"skipped":3978,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:01.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 14:17:01.848: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16" in namespace "downward-api-5816" to be "success or failure" Mar 8 14:17:01.860: INFO: Pod "downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.040066ms Mar 8 14:17:03.863: INFO: Pod "downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015937112s STEP: Saw pod success Mar 8 14:17:03.864: INFO: Pod "downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16" satisfied condition "success or failure" Mar 8 14:17:03.866: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16 container client-container: STEP: delete the pod Mar 8 14:17:03.885: INFO: Waiting for pod downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16 to disappear Mar 8 14:17:03.902: INFO: Pod downwardapi-volume-dbed8286-f0c7-4acd-ba3f-6b57a939ba16 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:03.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5816" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3995,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:03.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 14:17:03.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f" in namespace "projected-6657" to be "success or failure" Mar 8 14:17:03.970: INFO: Pod "downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.120166ms Mar 8 14:17:05.974: INFO: Pod "downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.022941489s STEP: Saw pod success Mar 8 14:17:05.974: INFO: Pod "downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f" satisfied condition "success or failure" Mar 8 14:17:05.977: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f container client-container: STEP: delete the pod Mar 8 14:17:06.011: INFO: Waiting for pod downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f to disappear Mar 8 14:17:06.042: INFO: Pod downwardapi-volume-c2fcb38e-ac77-4b17-a7cb-4671d230168f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:06.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6657" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3998,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:06.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:17:06.103: INFO: >>> kubeConfig: /root/.kube/config STEP: 
client-side validation (kubectl create and apply) allows request with known and required properties Mar 8 14:17:08.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 create -f -' Mar 8 14:17:10.747: INFO: stderr: "" Mar 8 14:17:10.747: INFO: stdout: "e2e-test-crd-publish-openapi-5249-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 8 14:17:10.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 delete e2e-test-crd-publish-openapi-5249-crds test-foo' Mar 8 14:17:10.887: INFO: stderr: "" Mar 8 14:17:10.887: INFO: stdout: "e2e-test-crd-publish-openapi-5249-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 8 14:17:10.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 apply -f -' Mar 8 14:17:11.234: INFO: stderr: "" Mar 8 14:17:11.234: INFO: stdout: "e2e-test-crd-publish-openapi-5249-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 8 14:17:11.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 delete e2e-test-crd-publish-openapi-5249-crds test-foo' Mar 8 14:17:11.328: INFO: stderr: "" Mar 8 14:17:11.328: INFO: stdout: "e2e-test-crd-publish-openapi-5249-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 8 14:17:11.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 create -f -' Mar 8 14:17:11.540: INFO: rc: 1 Mar 8 14:17:11.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 apply -f -' Mar 8 14:17:11.805: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required 
properties Mar 8 14:17:11.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 create -f -' Mar 8 14:17:12.073: INFO: rc: 1 Mar 8 14:17:12.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8769 apply -f -' Mar 8 14:17:12.358: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 8 14:17:12.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5249-crds' Mar 8 14:17:12.628: INFO: stderr: "" Mar 8 14:17:12.628: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5249-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 8 14:17:12.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5249-crds.metadata' Mar 8 14:17:12.887: INFO: stderr: "" Mar 8 14:17:12.887: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5249-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 8 14:17:12.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5249-crds.spec' Mar 8 14:17:13.170: INFO: stderr: "" Mar 8 14:17:13.170: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5249-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 8 14:17:13.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5249-crds.spec.bars' Mar 8 14:17:13.455: INFO: stderr: "" Mar 8 14:17:13.455: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5249-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl 
explain works to return error when explain is called on property that doesn't exist Mar 8 14:17:13.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5249-crds.spec.bars2' Mar 8 14:17:13.736: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:15.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8769" for this suite. • [SLOW TEST:9.607 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":248,"skipped":4007,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:15.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: 
Orphaning one of the Job's Pods Mar 8 14:17:20.246: INFO: Successfully updated pod "adopt-release-qlknr" STEP: Checking that the Job readopts the Pod Mar 8 14:17:20.246: INFO: Waiting up to 15m0s for pod "adopt-release-qlknr" in namespace "job-9463" to be "adopted" Mar 8 14:17:20.261: INFO: Pod "adopt-release-qlknr": Phase="Running", Reason="", readiness=true. Elapsed: 15.381297ms Mar 8 14:17:22.265: INFO: Pod "adopt-release-qlknr": Phase="Running", Reason="", readiness=true. Elapsed: 2.01921441s Mar 8 14:17:22.265: INFO: Pod "adopt-release-qlknr" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 8 14:17:22.774: INFO: Successfully updated pod "adopt-release-qlknr" STEP: Checking that the Job releases the Pod Mar 8 14:17:22.774: INFO: Waiting up to 15m0s for pod "adopt-release-qlknr" in namespace "job-9463" to be "released" Mar 8 14:17:22.791: INFO: Pod "adopt-release-qlknr": Phase="Running", Reason="", readiness=true. Elapsed: 17.065798ms Mar 8 14:17:24.795: INFO: Pod "adopt-release-qlknr": Phase="Running", Reason="", readiness=true. Elapsed: 2.020817122s Mar 8 14:17:24.795: INFO: Pod "adopt-release-qlknr" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:24.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9463" for this suite. 
• [SLOW TEST:9.138 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":249,"skipped":4009,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:24.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 8 14:17:24.850: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Mar 8 14:17:25.449: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 8 14:17:27.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 14:17:29.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 14:17:31.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 14:17:33.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 14:17:35.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719273845, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 14:17:39.757: INFO: Waited 2.218176841s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:40.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9971" for this suite. 
• [SLOW TEST:15.484 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":250,"skipped":4015,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:40.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 8 14:17:40.879: INFO: created pod pod-service-account-defaultsa Mar 8 14:17:40.879: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 8 14:17:40.885: INFO: created pod pod-service-account-mountsa Mar 8 14:17:40.885: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 8 14:17:40.892: INFO: created pod pod-service-account-nomountsa Mar 8 14:17:40.892: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 8 14:17:40.921: INFO: created pod pod-service-account-defaultsa-mountspec Mar 8 14:17:40.921: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 8 14:17:40.945: INFO: created pod pod-service-account-mountsa-mountspec Mar 8 14:17:40.945: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 8 14:17:40.951: INFO: created pod pod-service-account-nomountsa-mountspec Mar 8 14:17:40.951: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 8 14:17:41.002: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 8 14:17:41.002: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 8 14:17:41.017: INFO: created pod pod-service-account-mountsa-nomountspec Mar 8 14:17:41.017: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 8 14:17:41.045: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 8 14:17:41.045: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:41.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8084" for this suite. 
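The nine pods above exercise every combination of pod-level and service-account-level `automountServiceAccountToken`. The precedence they demonstrate — the pod spec overrides the service account's setting, and the token is mounted when neither is set — can be sketched as a small decision function (an illustration of the observed behavior, not the kubelet's implementation):

```python
from typing import Optional

def token_automounted(pod_automount: Optional[bool],
                      sa_automount: Optional[bool]) -> bool:
    """Decide whether a pod gets the service-account token volume.

    Precedence, as exercised by the nine pods in the log:
    pod.spec.automountServiceAccountToken overrides the service
    account's automountServiceAccountToken; default is to mount.
    """
    if pod_automount is not None:
        return pod_automount      # explicit pod-level setting wins
    if sa_automount is not None:
        return sa_automount       # fall back to the service account
    return True                   # neither set: token is mounted

# e.g. pod-service-account-nomountsa-mountspec -> mounted (pod spec wins)
print(token_automounted(True, False))   # True
```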
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":251,"skipped":4019,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:41.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9002/configmap-test-b8b742a2-2d1b-41f4-ad9a-96f1f4ba0794 STEP: Creating a pod to test consume configMaps Mar 8 14:17:41.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee" in namespace "configmap-9002" to be "success or failure" Mar 8 14:17:41.340: INFO: Pod "pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 40.709626ms Mar 8 14:17:43.344: INFO: Pod "pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044442287s Mar 8 14:17:45.347: INFO: Pod "pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04723742s STEP: Saw pod success Mar 8 14:17:45.347: INFO: Pod "pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee" satisfied condition "success or failure" Mar 8 14:17:45.349: INFO: Trying to get logs from node kind-worker pod pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee container env-test: STEP: delete the pod Mar 8 14:17:45.392: INFO: Waiting for pod pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee to disappear Mar 8 14:17:45.401: INFO: Pod pod-configmaps-44543f79-7422-4ec1-adc1-16be53faf1ee no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:17:45.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9002" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4042,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:17:45.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running 
status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:18:14.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8335" for this suite. STEP: Destroying namespace "nsdeletetest-5954" for this suite. Mar 8 14:18:14.675: INFO: Namespace nsdeletetest-5954 was already deleted STEP: Destroying namespace "nsdeletetest-7186" for this suite. • [SLOW TEST:29.270 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":253,"skipped":4057,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:18:14.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:18:14.726: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-890519c2-0ea8-4218-946e-9d48892749fe" in namespace "security-context-test-7910" to be "success or failure" Mar 8 14:18:14.730: INFO: Pod "busybox-privileged-false-890519c2-0ea8-4218-946e-9d48892749fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.598699ms Mar 8 14:18:16.734: INFO: Pod "busybox-privileged-false-890519c2-0ea8-4218-946e-9d48892749fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007811748s Mar 8 14:18:16.734: INFO: Pod "busybox-privileged-false-890519c2-0ea8-4218-946e-9d48892749fe" satisfied condition "success or failure" Mar 8 14:18:16.741: INFO: Got logs for pod "busybox-privileged-false-890519c2-0ea8-4218-946e-9d48892749fe": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:18:16.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7910" for this suite. 
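The pod log captured above ("ip: RTNETLINK answers: Operation not permitted") is how the test verifies the container really ran unprivileged: without `privileged: true` the container lacks CAP_NET_ADMIN, so its network-configuration attempt is denied by the kernel. A minimal sketch of that assertion (illustrative only, not the e2e framework's check):

```python
def ran_unprivileged(container_log: str) -> bool:
    # An unprivileged container lacks CAP_NET_ADMIN, so the kernel
    # rejects its RTNETLINK request with "Operation not permitted".
    return "Operation not permitted" in container_log

# The exact log line captured for the busybox-privileged-false pod:
print(ran_unprivileged("ip: RTNETLINK answers: Operation not permitted\n"))
```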
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:18:16.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6292.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6292.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6292.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6292.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6292.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6292.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 14:18:20.866: INFO: DNS probes using dns-6292/dns-test-0000cbe0-d94e-4fcd-a8de-96664a3b8208 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:18:20.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6292" for this suite. 
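The probe scripts above derive each pod's DNS A record name with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`: the pod IP with dots replaced by dashes, qualified under `<namespace>.pod.<cluster-domain>`. The same construction can be sketched in Python (assuming the default `cluster.local` domain seen in the log):

```python
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build a pod's DNS A record name from its IP, as the awk one-liner
    in the probe script does: dots in the IP become dashes, then the name
    is qualified under <namespace>.pod.<cluster-domain>."""
    return f'{pod_ip.replace(".", "-")}.{namespace}.pod.{cluster_domain}'

# e.g. for a pod IP from the networking test in namespace dns-6292:
print(pod_a_record("10.244.2.234", "dns-6292"))
# -> 10-244-2-234.dns-6292.pod.cluster.local
```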
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":255,"skipped":4103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:18:20.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-2990/secret-test-83863473-0e44-4050-911b-d07e32be6c99 STEP: Creating a pod to test consume secrets Mar 8 14:18:21.059: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498" in namespace "secrets-2990" to be "success or failure" Mar 8 14:18:21.078: INFO: Pod "pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498": Phase="Pending", Reason="", readiness=false. Elapsed: 18.622925ms Mar 8 14:18:23.081: INFO: Pod "pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.021541607s STEP: Saw pod success Mar 8 14:18:23.081: INFO: Pod "pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498" satisfied condition "success or failure" Mar 8 14:18:23.083: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498 container env-test: STEP: delete the pod Mar 8 14:18:23.097: INFO: Waiting for pod pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498 to disappear Mar 8 14:18:23.121: INFO: Pod pod-configmaps-d6f9efd4-c113-4a46-94a5-2fa2be2a9498 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:18:23.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2990" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4135,"failed":0} ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:18:23.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:18:27.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3904" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":257,"skipped":4135,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:18:28.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6057 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 14:18:28.042: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 14:18:48.122: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=http&host=10.244.2.234&port=8080&tries=1'] Namespace:pod-network-test-6057 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 14:18:48.122: INFO: >>> kubeConfig: /root/.kube/config I0308 14:18:48.160716 6 log.go:172] (0xc002c24370) 
(0xc002836960) Create stream I0308 14:18:48.160767 6 log.go:172] (0xc002c24370) (0xc002836960) Stream added, broadcasting: 1 I0308 14:18:48.163818 6 log.go:172] (0xc002c24370) Reply frame received for 1 I0308 14:18:48.163856 6 log.go:172] (0xc002c24370) (0xc002226b40) Create stream I0308 14:18:48.163871 6 log.go:172] (0xc002c24370) (0xc002226b40) Stream added, broadcasting: 3 I0308 14:18:48.164851 6 log.go:172] (0xc002c24370) Reply frame received for 3 I0308 14:18:48.164882 6 log.go:172] (0xc002c24370) (0xc002836a00) Create stream I0308 14:18:48.164892 6 log.go:172] (0xc002c24370) (0xc002836a00) Stream added, broadcasting: 5 I0308 14:18:48.165905 6 log.go:172] (0xc002c24370) Reply frame received for 5 I0308 14:18:48.233244 6 log.go:172] (0xc002c24370) Data frame received for 3 I0308 14:18:48.233279 6 log.go:172] (0xc002226b40) (3) Data frame handling I0308 14:18:48.233311 6 log.go:172] (0xc002226b40) (3) Data frame sent I0308 14:18:48.233763 6 log.go:172] (0xc002c24370) Data frame received for 5 I0308 14:18:48.233814 6 log.go:172] (0xc002836a00) (5) Data frame handling I0308 14:18:48.233909 6 log.go:172] (0xc002c24370) Data frame received for 3 I0308 14:18:48.233929 6 log.go:172] (0xc002226b40) (3) Data frame handling I0308 14:18:48.236052 6 log.go:172] (0xc002c24370) Data frame received for 1 I0308 14:18:48.236080 6 log.go:172] (0xc002836960) (1) Data frame handling I0308 14:18:48.236103 6 log.go:172] (0xc002836960) (1) Data frame sent I0308 14:18:48.236131 6 log.go:172] (0xc002c24370) (0xc002836960) Stream removed, broadcasting: 1 I0308 14:18:48.236150 6 log.go:172] (0xc002c24370) Go away received I0308 14:18:48.236361 6 log.go:172] (0xc002c24370) (0xc002836960) Stream removed, broadcasting: 1 I0308 14:18:48.236397 6 log.go:172] (0xc002c24370) (0xc002226b40) Stream removed, broadcasting: 3 I0308 14:18:48.236415 6 log.go:172] (0xc002c24370) (0xc002836a00) Stream removed, broadcasting: 5 Mar 8 14:18:48.236: INFO: Waiting for responses: map[] Mar 8 14:18:48.239: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=http&host=10.244.1.215&port=8080&tries=1'] Namespace:pod-network-test-6057 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 14:18:48.239: INFO: >>> kubeConfig: /root/.kube/config I0308 14:18:48.272501 6 log.go:172] (0xc002482840) (0xc002226fa0) Create stream I0308 14:18:48.272545 6 log.go:172] (0xc002482840) (0xc002226fa0) Stream added, broadcasting: 1 I0308 14:18:48.276185 6 log.go:172] (0xc002482840) Reply frame received for 1 I0308 14:18:48.276223 6 log.go:172] (0xc002482840) (0xc0022fa820) Create stream I0308 14:18:48.276235 6 log.go:172] (0xc002482840) (0xc0022fa820) Stream added, broadcasting: 3 I0308 14:18:48.277335 6 log.go:172] (0xc002482840) Reply frame received for 3 I0308 14:18:48.277380 6 log.go:172] (0xc002482840) (0xc002227040) Create stream I0308 14:18:48.277392 6 log.go:172] (0xc002482840) (0xc002227040) Stream added, broadcasting: 5 I0308 14:18:48.278234 6 log.go:172] (0xc002482840) Reply frame received for 5 I0308 14:18:48.342889 6 log.go:172] (0xc002482840) Data frame received for 3 I0308 14:18:48.342910 6 log.go:172] (0xc0022fa820) (3) Data frame handling I0308 14:18:48.342925 6 log.go:172] (0xc0022fa820) (3) Data frame sent I0308 14:18:48.343461 6 log.go:172] (0xc002482840) Data frame received for 3 I0308 14:18:48.343486 6 log.go:172] (0xc0022fa820) (3) Data frame handling I0308 14:18:48.343841 6 log.go:172] (0xc002482840) Data frame received for 5 I0308 14:18:48.343860 6 log.go:172] (0xc002227040) (5) Data frame handling I0308 14:18:48.345431 6 log.go:172] (0xc002482840) Data frame received for 1 I0308 14:18:48.345449 6 log.go:172] (0xc002226fa0) (1) Data frame handling I0308 14:18:48.345460 6 log.go:172] (0xc002226fa0) (1) Data frame sent I0308 14:18:48.345503 6 log.go:172] (0xc002482840) (0xc002226fa0) Stream removed, broadcasting: 1 I0308 
14:18:48.345572 6 log.go:172] (0xc002482840) Go away received I0308 14:18:48.345612 6 log.go:172] (0xc002482840) (0xc002226fa0) Stream removed, broadcasting: 1 I0308 14:18:48.345641 6 log.go:172] (0xc002482840) (0xc0022fa820) Stream removed, broadcasting: 3 I0308 14:18:48.345661 6 log.go:172] (0xc002482840) (0xc002227040) Stream removed, broadcasting: 5 Mar 8 14:18:48.345: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:18:48.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6057" for this suite. • [SLOW TEST:20.351 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4143,"failed":0} S ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:18:48.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve 
DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8018 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8018;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8018 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8018;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8018.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8018.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8018.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8018.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8018.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8018.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8018.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.223_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8018 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8018;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8018 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8018;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8018.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8018.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8018.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8018.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8018.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8018.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8018.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8018.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8018.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.252.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.252.223_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 14:18:52.477: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.479: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.482: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.485: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the 
server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.490: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.492: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.514: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.516: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.518: INFO: Unable to read jessie_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.520: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018 from pod 
dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.522: INFO: Unable to read jessie_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.526: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.529: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.531: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:52.544: INFO: Lookups using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8018 wheezy_tcp@dns-test-service.dns-8018 wheezy_udp@dns-test-service.dns-8018.svc wheezy_tcp@dns-test-service.dns-8018.svc wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8018 jessie_tcp@dns-test-service.dns-8018 jessie_udp@dns-test-service.dns-8018.svc jessie_tcp@dns-test-service.dns-8018.svc jessie_udp@_http._tcp.dns-test-service.dns-8018.svc jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc] Mar 8 14:18:57.572: INFO: Unable to read 
wheezy_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.582: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.586: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.589: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.594: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.597: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.600: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.620: INFO: 
Unable to read jessie_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.623: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.626: INFO: Unable to read jessie_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.628: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.630: INFO: Unable to read jessie_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.635: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:18:57.638: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 
14:18:57.653: INFO: Lookups using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8018 wheezy_tcp@dns-test-service.dns-8018 wheezy_udp@dns-test-service.dns-8018.svc wheezy_tcp@dns-test-service.dns-8018.svc wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8018 jessie_tcp@dns-test-service.dns-8018 jessie_udp@dns-test-service.dns-8018.svc jessie_tcp@dns-test-service.dns-8018.svc jessie_udp@_http._tcp.dns-test-service.dns-8018.svc jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc] Mar 8 14:19:02.548: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.551: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.558: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.565: INFO: Unable 
to read wheezy_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.568: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.570: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.591: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.593: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.596: INFO: Unable to read jessie_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.601: INFO: Unable to read jessie_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.603: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.606: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.608: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:02.623: INFO: Lookups using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8018 wheezy_tcp@dns-test-service.dns-8018 wheezy_udp@dns-test-service.dns-8018.svc wheezy_tcp@dns-test-service.dns-8018.svc wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8018 jessie_tcp@dns-test-service.dns-8018 jessie_udp@dns-test-service.dns-8018.svc jessie_tcp@dns-test-service.dns-8018.svc jessie_udp@_http._tcp.dns-test-service.dns-8018.svc jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc] Mar 8 14:19:07.549: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.553: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 
14:19:07.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.559: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.566: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.570: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.595: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.598: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods 
dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.601: INFO: Unable to read jessie_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.606: INFO: Unable to read jessie_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.609: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:07.636: INFO: Lookups using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8018 wheezy_tcp@dns-test-service.dns-8018 wheezy_udp@dns-test-service.dns-8018.svc wheezy_tcp@dns-test-service.dns-8018.svc wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8018 jessie_tcp@dns-test-service.dns-8018 jessie_udp@dns-test-service.dns-8018.svc jessie_tcp@dns-test-service.dns-8018.svc jessie_udp@_http._tcp.dns-test-service.dns-8018.svc jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc] Mar 8 14:19:12.548: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.551: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.557: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.563: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc from pod 
dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.569: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.589: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.593: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.595: INFO: Unable to read jessie_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.597: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.600: INFO: Unable to read jessie_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.605: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:12.622: INFO: Lookups using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8018 wheezy_tcp@dns-test-service.dns-8018 wheezy_udp@dns-test-service.dns-8018.svc wheezy_tcp@dns-test-service.dns-8018.svc wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8018 jessie_tcp@dns-test-service.dns-8018 jessie_udp@dns-test-service.dns-8018.svc jessie_tcp@dns-test-service.dns-8018.svc jessie_udp@_http._tcp.dns-test-service.dns-8018.svc jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc] Mar 8 14:19:17.548: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.551: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.556: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.559: INFO: Unable to read wheezy_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.561: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.564: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.566: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.583: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.585: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.588: INFO: Unable to read jessie_udp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.590: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-8018 from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.596: INFO: Unable to read jessie_udp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.599: INFO: Unable to read jessie_tcp@dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.602: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.605: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc from pod dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c: the server could not find the requested resource (get pods dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c) Mar 8 14:19:17.619: INFO: Lookups using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8018 wheezy_tcp@dns-test-service.dns-8018 wheezy_udp@dns-test-service.dns-8018.svc wheezy_tcp@dns-test-service.dns-8018.svc wheezy_udp@_http._tcp.dns-test-service.dns-8018.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8018.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8018 jessie_tcp@dns-test-service.dns-8018 jessie_udp@dns-test-service.dns-8018.svc jessie_tcp@dns-test-service.dns-8018.svc jessie_udp@_http._tcp.dns-test-service.dns-8018.svc jessie_tcp@_http._tcp.dns-test-service.dns-8018.svc] 
Mar 8 14:19:22.630: INFO: DNS probes using dns-8018/dns-test-e6c84c20-b9df-4199-bdc3-a7c2375dc08c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 14:19:22.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8018" for this suite.
• [SLOW TEST:34.488 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":259,"skipped":4144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 14:19:22.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 8 14:19:25.409: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fa132976-16ab-4d13-b0c4-e160aba9bc02"
Mar 8 14:19:25.409: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fa132976-16ab-4d13-b0c4-e160aba9bc02" in namespace "pods-4131" to be "terminated due to deadline exceeded"
Mar 8 14:19:25.428: INFO: Pod "pod-update-activedeadlineseconds-fa132976-16ab-4d13-b0c4-e160aba9bc02": Phase="Running", Reason="", readiness=true. Elapsed: 19.370823ms
Mar 8 14:19:27.432: INFO: Pod "pod-update-activedeadlineseconds-fa132976-16ab-4d13-b0c4-e160aba9bc02": Phase="Running", Reason="", readiness=true. Elapsed: 2.023398538s
Mar 8 14:19:29.436: INFO: Pod "pod-update-activedeadlineseconds-fa132976-16ab-4d13-b0c4-e160aba9bc02": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.026935254s
Mar 8 14:19:29.436: INFO: Pod "pod-update-activedeadlineseconds-fa132976-16ab-4d13-b0c4-e160aba9bc02" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 14:19:29.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4131" for this suite.
• [SLOW TEST:6.601 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4182,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 14:19:29.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 8 14:19:32.025: INFO: Successfully updated pod "pod-update-0a4a67c9-227e-4659-a273-6734eed1fd84"
STEP: verifying the updated pod is in kubernetes
Mar 8 14:19:32.035: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 14:19:32.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2273" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4258,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:32.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 8 14:19:32.112: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 8 14:19:35.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4095 create -f -' Mar 8 14:19:36.904: INFO: stderr: "" Mar 8 14:19:36.904: INFO: stdout: "e2e-test-crd-publish-openapi-7869-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 14:19:36.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4095 delete e2e-test-crd-publish-openapi-7869-crds test-cr' Mar 8 14:19:37.027: INFO: stderr: "" Mar 8 14:19:37.027: INFO: stdout: "e2e-test-crd-publish-openapi-7869-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 8 14:19:37.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4095 
apply -f -' Mar 8 14:19:37.280: INFO: stderr: "" Mar 8 14:19:37.280: INFO: stdout: "e2e-test-crd-publish-openapi-7869-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 8 14:19:37.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4095 delete e2e-test-crd-publish-openapi-7869-crds test-cr' Mar 8 14:19:37.361: INFO: stderr: "" Mar 8 14:19:37.361: INFO: stdout: "e2e-test-crd-publish-openapi-7869-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 8 14:19:37.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7869-crds' Mar 8 14:19:37.560: INFO: stderr: "" Mar 8 14:19:37.560: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7869-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:40.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4095" for this suite. 
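The CRD in this test publishes no per-field schema; it opts out of pruning with `x-kubernetes-preserve-unknown-fields: true` at the schema root. That is why `kubectl create`/`apply` accept a custom resource with arbitrary unknown properties, and why `kubectl explain` prints an empty DESCRIPTION above. A minimal sketch of such a CRD (group and names are hypothetical, not the generated `e2e-test-...` fixtures):

```python
# Hypothetical apiextensions.k8s.io/v1 CRD whose root schema preserves
# unknown fields, mirroring the shape the e2e test generates.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "testcrds.example.com"},  # must be <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {
            "plural": "testcrds",
            "singular": "testcrd",
            "kind": "TestCrd",
            "listKind": "TestCrdList",
        },
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                # Accept any properties; skip pruning at the schema root.
                "x-kubernetes-preserve-unknown-fields": True,
            }},
        }],
    },
}
```

Without the `x-kubernetes-preserve-unknown-fields` marker, a v1 CRD requires a structural schema and the API server would prune the unknown properties the test submits.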
• [SLOW TEST:8.220 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":262,"skipped":4261,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:40.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 8 14:19:40.332: INFO: Waiting up to 5m0s for pod "pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8" in namespace "emptydir-6488" to be "success or failure" Mar 8 14:19:40.343: INFO: Pod "pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.765749ms Mar 8 14:19:42.346: INFO: Pod "pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.014641975s STEP: Saw pod success Mar 8 14:19:42.347: INFO: Pod "pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8" satisfied condition "success or failure" Mar 8 14:19:42.350: INFO: Trying to get logs from node kind-worker2 pod pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8 container test-container: STEP: delete the pod Mar 8 14:19:42.382: INFO: Waiting for pod pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8 to disappear Mar 8 14:19:42.391: INFO: Pod pod-e4bbcb7d-271f-4c3d-bbc4-f985bb74a2a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6488" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4263,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:42.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 14:19:42.464: INFO: Waiting up to 5m0s for pod "pod-d380f7b7-b99c-4830-b302-866aa67de245" in namespace "emptydir-9182" to be "success or failure" Mar 8 14:19:42.476: 
INFO: Pod "pod-d380f7b7-b99c-4830-b302-866aa67de245": Phase="Pending", Reason="", readiness=false. Elapsed: 11.178049ms Mar 8 14:19:44.480: INFO: Pod "pod-d380f7b7-b99c-4830-b302-866aa67de245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015206082s STEP: Saw pod success Mar 8 14:19:44.480: INFO: Pod "pod-d380f7b7-b99c-4830-b302-866aa67de245" satisfied condition "success or failure" Mar 8 14:19:44.483: INFO: Trying to get logs from node kind-worker pod pod-d380f7b7-b99c-4830-b302-866aa67de245 container test-container: STEP: delete the pod Mar 8 14:19:44.518: INFO: Waiting for pod pod-d380f7b7-b99c-4830-b302-866aa67de245 to disappear Mar 8 14:19:44.523: INFO: Pod pod-d380f7b7-b99c-4830-b302-866aa67de245 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:44.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9182" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4278,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:44.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:55.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9914" for this suite. • [SLOW TEST:11.104 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":265,"skipped":4288,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:55.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:55.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8385" for this suite. 
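The QOS test above creates a pod whose cpu and memory requests equal its limits and asserts the API server sets `status.qosClass` to `Guaranteed`. The classification rule can be sketched as follows (a simplified reconstruction: it ignores init containers, hugepages, and extended resources):

```python
def qos_class(containers):
    # Simplified Kubernetes QoS classification:
    #   Guaranteed - every container limits cpu+memory, requests == limits
    #                (unset requests default to the limits)
    #   BestEffort - no container sets any requests or limits
    #   Burstable  - everything else
    any_requests, any_limits = False, False
    guaranteed = True
    for c in containers:
        res = c.get("resources", {})
        req, lim = res.get("requests", {}), res.get("limits", {})
        any_requests = any_requests or bool(req)
        any_limits = any_limits or bool(lim)
        if set(lim) != {"cpu", "memory"} or (req and req != lim):
            guaranteed = False
    if not any_requests and not any_limits:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"
```

With matching requests and limits, as in this test, the result is `Guaranteed`; dropping the limits (or setting requests below them) demotes the pod to `Burstable`, and omitting resources entirely yields `BestEffort`.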
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":266,"skipped":4293,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:55.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a6b8e1f6-ea24-40b9-9069-8f165f28cc86 STEP: Creating a pod to test consume configMaps Mar 8 14:19:55.813: INFO: Waiting up to 5m0s for pod "pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c" in namespace "configmap-4937" to be "success or failure" Mar 8 14:19:55.820: INFO: Pod "pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.432102ms Mar 8 14:19:57.824: INFO: Pod "pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011798202s STEP: Saw pod success Mar 8 14:19:57.824: INFO: Pod "pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c" satisfied condition "success or failure" Mar 8 14:19:57.828: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c container configmap-volume-test: STEP: delete the pod Mar 8 14:19:57.862: INFO: Waiting for pod pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c to disappear Mar 8 14:19:57.870: INFO: Pod pod-configmaps-46db9db2-aab4-4dc9-a21e-47306e446e7c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:57.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4937" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4303,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:19:57.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-c0c9249a-8326-4e66-8ecd-d21cdefdd249 STEP: Creating a pod to test consume secrets Mar 8 
14:19:57.950: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839" in namespace "projected-2200" to be "success or failure" Mar 8 14:19:57.954: INFO: Pod "pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839": Phase="Pending", Reason="", readiness=false. Elapsed: 3.956659ms Mar 8 14:19:59.958: INFO: Pod "pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008207827s STEP: Saw pod success Mar 8 14:19:59.959: INFO: Pod "pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839" satisfied condition "success or failure" Mar 8 14:19:59.962: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839 container projected-secret-volume-test: STEP: delete the pod Mar 8 14:19:59.980: INFO: Waiting for pod pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839 to disappear Mar 8 14:19:59.991: INFO: Pod pod-projected-secrets-0e1bc3c6-3d68-4751-a9db-6c34606d1839 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:19:59.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2200" for this suite. 
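"With mappings" in the projected-secret test above means the secret's keys are remapped to custom file paths via `items`, rather than mounted under their own names. A minimal sketch of the pod spec stanza involved (secret name, key, and paths are hypothetical):

```python
# Hypothetical pod spec consuming a secret through a projected volume,
# remapping key "data-1" to a custom path inside the mount.
pod_spec = {
    "volumes": [{
        "name": "projected-secret-volume",
        "projected": {"sources": [{
            "secret": {
                "name": "projected-secret-test-map",  # hypothetical
                "items": [{"key": "data-1", "path": "new-path-data-1"}],
            },
        }]},
    }],
    "containers": [{
        "name": "projected-secret-volume-test",
        "image": "busybox",
        "volumeMounts": [{
            "name": "projected-secret-volume",
            "mountPath": "/etc/projected-secret-volume",
            "readOnly": True,
        }],
    }],
}
```

The test container then reads `/etc/projected-secret-volume/new-path-data-1` and the framework checks the container log for the expected secret content, which is the "Trying to get logs ... container projected-secret-volume-test" step in the output.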
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4314,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:00.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 8 14:20:00.070: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce" in namespace "downward-api-3588" to be "success or failure" Mar 8 14:20:00.085: INFO: Pod "downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce": Phase="Pending", Reason="", readiness=false. Elapsed: 14.58715ms Mar 8 14:20:02.089: INFO: Pod "downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01868381s STEP: Saw pod success Mar 8 14:20:02.089: INFO: Pod "downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce" satisfied condition "success or failure" Mar 8 14:20:02.091: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce container client-container: STEP: delete the pod Mar 8 14:20:02.143: INFO: Waiting for pod downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce to disappear Mar 8 14:20:02.158: INFO: Pod downwardapi-volume-fa5aef28-0136-4907-b28b-75bb504096ce no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:20:02.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3588" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:02.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-0d02c16d-e7d1-460a-929c-b5446d1a5a75 STEP: Creating configMap with name 
cm-test-opt-upd-75c85ed6-9336-418f-b2dd-90d94bd24221 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0d02c16d-e7d1-460a-929c-b5446d1a5a75 STEP: Updating configmap cm-test-opt-upd-75c85ed6-9336-418f-b2dd-90d94bd24221 STEP: Creating configMap with name cm-test-opt-create-1662de91-1a3d-4e20-ad5d-571fc49cdd44 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:20:06.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7407" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4349,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:06.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 14:20:06.411: INFO: Waiting up to 5m0s for pod "pod-f4db5888-40e2-447a-bc2f-181c428e1bf9" in namespace "emptydir-4788" to be "success or failure" Mar 8 14:20:06.430: INFO: Pod "pod-f4db5888-40e2-447a-bc2f-181c428e1bf9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.579802ms Mar 8 14:20:08.434: INFO: Pod "pod-f4db5888-40e2-447a-bc2f-181c428e1bf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022562112s STEP: Saw pod success Mar 8 14:20:08.434: INFO: Pod "pod-f4db5888-40e2-447a-bc2f-181c428e1bf9" satisfied condition "success or failure" Mar 8 14:20:08.436: INFO: Trying to get logs from node kind-worker pod pod-f4db5888-40e2-447a-bc2f-181c428e1bf9 container test-container: STEP: delete the pod Mar 8 14:20:08.465: INFO: Waiting for pod pod-f4db5888-40e2-447a-bc2f-181c428e1bf9 to disappear Mar 8 14:20:08.470: INFO: Pod pod-f4db5888-40e2-447a-bc2f-181c428e1bf9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:20:08.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4788" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4363,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:08.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: creating service nodeport-test with type=NodePort in namespace services-3953 STEP: creating replication controller nodeport-test in namespace services-3953 I0308 14:20:08.608832 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3953, replica count: 2 I0308 14:20:11.659279 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 14:20:11.659: INFO: Creating new exec pod Mar 8 14:20:14.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3953 execpodkmbrg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 8 14:20:14.920: INFO: stderr: "I0308 14:20:14.859940 3775 log.go:172] (0xc000119290) (0xc00091e0a0) Create stream\nI0308 14:20:14.859994 3775 log.go:172] (0xc000119290) (0xc00091e0a0) Stream added, broadcasting: 1\nI0308 14:20:14.862397 3775 log.go:172] (0xc000119290) Reply frame received for 1\nI0308 14:20:14.862438 3775 log.go:172] (0xc000119290) (0xc000b5a000) Create stream\nI0308 14:20:14.862455 3775 log.go:172] (0xc000119290) (0xc000b5a000) Stream added, broadcasting: 3\nI0308 14:20:14.863558 3775 log.go:172] (0xc000119290) Reply frame received for 3\nI0308 14:20:14.863601 3775 log.go:172] (0xc000119290) (0xc00091e140) Create stream\nI0308 14:20:14.863614 3775 log.go:172] (0xc000119290) (0xc00091e140) Stream added, broadcasting: 5\nI0308 14:20:14.864700 3775 log.go:172] (0xc000119290) Reply frame received for 5\nI0308 14:20:14.914065 3775 log.go:172] (0xc000119290) Data frame received for 5\nI0308 14:20:14.914097 3775 log.go:172] (0xc00091e140) (5) Data frame handling\nI0308 14:20:14.914142 3775 log.go:172] (0xc00091e140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0308 14:20:14.915407 3775 log.go:172] (0xc000119290) Data frame received for 5\nI0308 14:20:14.915434 3775 log.go:172] (0xc00091e140) (5) Data frame handling\nI0308 14:20:14.915456 3775 
log.go:172] (0xc00091e140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0308 14:20:14.915747 3775 log.go:172] (0xc000119290) Data frame received for 3\nI0308 14:20:14.915771 3775 log.go:172] (0xc000b5a000) (3) Data frame handling\nI0308 14:20:14.915792 3775 log.go:172] (0xc000119290) Data frame received for 5\nI0308 14:20:14.915804 3775 log.go:172] (0xc00091e140) (5) Data frame handling\nI0308 14:20:14.917527 3775 log.go:172] (0xc000119290) Data frame received for 1\nI0308 14:20:14.917548 3775 log.go:172] (0xc00091e0a0) (1) Data frame handling\nI0308 14:20:14.917558 3775 log.go:172] (0xc00091e0a0) (1) Data frame sent\nI0308 14:20:14.917569 3775 log.go:172] (0xc000119290) (0xc00091e0a0) Stream removed, broadcasting: 1\nI0308 14:20:14.917587 3775 log.go:172] (0xc000119290) Go away received\nI0308 14:20:14.917925 3775 log.go:172] (0xc000119290) (0xc00091e0a0) Stream removed, broadcasting: 1\nI0308 14:20:14.917948 3775 log.go:172] (0xc000119290) (0xc000b5a000) Stream removed, broadcasting: 3\nI0308 14:20:14.917959 3775 log.go:172] (0xc000119290) (0xc00091e140) Stream removed, broadcasting: 5\n" Mar 8 14:20:14.920: INFO: stdout: "" Mar 8 14:20:14.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3953 execpodkmbrg -- /bin/sh -x -c nc -zv -t -w 2 10.96.145.216 80' Mar 8 14:20:15.117: INFO: stderr: "I0308 14:20:15.055022 3797 log.go:172] (0xc0000f4580) (0xc0004415e0) Create stream\nI0308 14:20:15.055059 3797 log.go:172] (0xc0000f4580) (0xc0004415e0) Stream added, broadcasting: 1\nI0308 14:20:15.057000 3797 log.go:172] (0xc0000f4580) Reply frame received for 1\nI0308 14:20:15.057028 3797 log.go:172] (0xc0000f4580) (0xc00067bb80) Create stream\nI0308 14:20:15.057039 3797 log.go:172] (0xc0000f4580) (0xc00067bb80) Stream added, broadcasting: 3\nI0308 14:20:15.057650 3797 log.go:172] (0xc0000f4580) Reply frame received for 3\nI0308 14:20:15.057670 3797 log.go:172] (0xc0000f4580) 
(0xc000908000) Create stream\nI0308 14:20:15.057675 3797 log.go:172] (0xc0000f4580) (0xc000908000) Stream added, broadcasting: 5\nI0308 14:20:15.058374 3797 log.go:172] (0xc0000f4580) Reply frame received for 5\nI0308 14:20:15.112428 3797 log.go:172] (0xc0000f4580) Data frame received for 3\nI0308 14:20:15.112448 3797 log.go:172] (0xc00067bb80) (3) Data frame handling\nI0308 14:20:15.112468 3797 log.go:172] (0xc0000f4580) Data frame received for 5\nI0308 14:20:15.112477 3797 log.go:172] (0xc000908000) (5) Data frame handling\nI0308 14:20:15.112486 3797 log.go:172] (0xc000908000) (5) Data frame sent\nI0308 14:20:15.112495 3797 log.go:172] (0xc0000f4580) Data frame received for 5\nI0308 14:20:15.112503 3797 log.go:172] (0xc000908000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.145.216 80\nConnection to 10.96.145.216 80 port [tcp/http] succeeded!\nI0308 14:20:15.113830 3797 log.go:172] (0xc0000f4580) Data frame received for 1\nI0308 14:20:15.113862 3797 log.go:172] (0xc0004415e0) (1) Data frame handling\nI0308 14:20:15.113892 3797 log.go:172] (0xc0004415e0) (1) Data frame sent\nI0308 14:20:15.114246 3797 log.go:172] (0xc0000f4580) (0xc0004415e0) Stream removed, broadcasting: 1\nI0308 14:20:15.114277 3797 log.go:172] (0xc0000f4580) Go away received\nI0308 14:20:15.114578 3797 log.go:172] (0xc0000f4580) (0xc0004415e0) Stream removed, broadcasting: 1\nI0308 14:20:15.114598 3797 log.go:172] (0xc0000f4580) (0xc00067bb80) Stream removed, broadcasting: 3\nI0308 14:20:15.114621 3797 log.go:172] (0xc0000f4580) (0xc000908000) Stream removed, broadcasting: 5\n" Mar 8 14:20:15.117: INFO: stdout: "" Mar 8 14:20:15.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3953 execpodkmbrg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.4 31446' Mar 8 14:20:15.303: INFO: stderr: "I0308 14:20:15.241744 3819 log.go:172] (0xc00041e6e0) (0xc000940000) Create stream\nI0308 14:20:15.241787 3819 log.go:172] (0xc00041e6e0) (0xc000940000) Stream 
added, broadcasting: 1\nI0308 14:20:15.243714 3819 log.go:172] (0xc00041e6e0) Reply frame received for 1\nI0308 14:20:15.243747 3819 log.go:172] (0xc00041e6e0) (0xc0006dba40) Create stream\nI0308 14:20:15.243759 3819 log.go:172] (0xc00041e6e0) (0xc0006dba40) Stream added, broadcasting: 3\nI0308 14:20:15.244533 3819 log.go:172] (0xc00041e6e0) Reply frame received for 3\nI0308 14:20:15.244556 3819 log.go:172] (0xc00041e6e0) (0xc0006dbc20) Create stream\nI0308 14:20:15.244568 3819 log.go:172] (0xc00041e6e0) (0xc0006dbc20) Stream added, broadcasting: 5\nI0308 14:20:15.245353 3819 log.go:172] (0xc00041e6e0) Reply frame received for 5\nI0308 14:20:15.299763 3819 log.go:172] (0xc00041e6e0) Data frame received for 5\nI0308 14:20:15.299794 3819 log.go:172] (0xc0006dbc20) (5) Data frame handling\nI0308 14:20:15.299811 3819 log.go:172] (0xc0006dbc20) (5) Data frame sent\nI0308 14:20:15.299825 3819 log.go:172] (0xc00041e6e0) Data frame received for 5\nI0308 14:20:15.299832 3819 log.go:172] (0xc0006dbc20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.4 31446\nConnection to 172.17.0.4 31446 port [tcp/31446] succeeded!\nI0308 14:20:15.299851 3819 log.go:172] (0xc00041e6e0) Data frame received for 3\nI0308 14:20:15.299858 3819 log.go:172] (0xc0006dba40) (3) Data frame handling\nI0308 14:20:15.301080 3819 log.go:172] (0xc00041e6e0) Data frame received for 1\nI0308 14:20:15.301099 3819 log.go:172] (0xc000940000) (1) Data frame handling\nI0308 14:20:15.301115 3819 log.go:172] (0xc000940000) (1) Data frame sent\nI0308 14:20:15.301134 3819 log.go:172] (0xc00041e6e0) (0xc000940000) Stream removed, broadcasting: 1\nI0308 14:20:15.301155 3819 log.go:172] (0xc00041e6e0) Go away received\nI0308 14:20:15.301372 3819 log.go:172] (0xc00041e6e0) (0xc000940000) Stream removed, broadcasting: 1\nI0308 14:20:15.301384 3819 log.go:172] (0xc00041e6e0) (0xc0006dba40) Stream removed, broadcasting: 3\nI0308 14:20:15.301390 3819 log.go:172] (0xc00041e6e0) (0xc0006dbc20) Stream removed, broadcasting: 
5\n" Mar 8 14:20:15.303: INFO: stdout: "" Mar 8 14:20:15.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3953 execpodkmbrg -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.5 31446' Mar 8 14:20:15.465: INFO: stderr: "I0308 14:20:15.399537 3839 log.go:172] (0xc000baca50) (0xc000ab20a0) Create stream\nI0308 14:20:15.399568 3839 log.go:172] (0xc000baca50) (0xc000ab20a0) Stream added, broadcasting: 1\nI0308 14:20:15.401661 3839 log.go:172] (0xc000baca50) Reply frame received for 1\nI0308 14:20:15.401688 3839 log.go:172] (0xc000baca50) (0xc000667b80) Create stream\nI0308 14:20:15.401695 3839 log.go:172] (0xc000baca50) (0xc000667b80) Stream added, broadcasting: 3\nI0308 14:20:15.402328 3839 log.go:172] (0xc000baca50) Reply frame received for 3\nI0308 14:20:15.402348 3839 log.go:172] (0xc000baca50) (0xc000667c20) Create stream\nI0308 14:20:15.402353 3839 log.go:172] (0xc000baca50) (0xc000667c20) Stream added, broadcasting: 5\nI0308 14:20:15.402939 3839 log.go:172] (0xc000baca50) Reply frame received for 5\nI0308 14:20:15.460631 3839 log.go:172] (0xc000baca50) Data frame received for 5\nI0308 14:20:15.460652 3839 log.go:172] (0xc000667c20) (5) Data frame handling\nI0308 14:20:15.460663 3839 log.go:172] (0xc000667c20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.5 31446\nConnection to 172.17.0.5 31446 port [tcp/31446] succeeded!\nI0308 14:20:15.461143 3839 log.go:172] (0xc000baca50) Data frame received for 5\nI0308 14:20:15.461171 3839 log.go:172] (0xc000667c20) (5) Data frame handling\nI0308 14:20:15.461191 3839 log.go:172] (0xc000baca50) Data frame received for 3\nI0308 14:20:15.461202 3839 log.go:172] (0xc000667b80) (3) Data frame handling\nI0308 14:20:15.462566 3839 log.go:172] (0xc000baca50) Data frame received for 1\nI0308 14:20:15.462589 3839 log.go:172] (0xc000ab20a0) (1) Data frame handling\nI0308 14:20:15.462611 3839 log.go:172] (0xc000ab20a0) (1) Data frame sent\nI0308 14:20:15.462800 3839 log.go:172] (0xc000baca50) 
(0xc000ab20a0) Stream removed, broadcasting: 1\nI0308 14:20:15.462832 3839 log.go:172] (0xc000baca50) Go away received\nI0308 14:20:15.463149 3839 log.go:172] (0xc000baca50) (0xc000ab20a0) Stream removed, broadcasting: 1\nI0308 14:20:15.463169 3839 log.go:172] (0xc000baca50) (0xc000667b80) Stream removed, broadcasting: 3\nI0308 14:20:15.463177 3839 log.go:172] (0xc000baca50) (0xc000667c20) Stream removed, broadcasting: 5\n" Mar 8 14:20:15.465: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:20:15.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3953" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:6.975 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":272,"skipped":4365,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:15.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] 
should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-631ce591-5593-44c4-8369-ec4b2f2f3634 STEP: Creating a pod to test consume secrets Mar 8 14:20:15.549: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd" in namespace "projected-2664" to be "success or failure" Mar 8 14:20:15.554: INFO: Pod "pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.825617ms Mar 8 14:20:17.558: INFO: Pod "pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008344236s STEP: Saw pod success Mar 8 14:20:17.558: INFO: Pod "pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd" satisfied condition "success or failure" Mar 8 14:20:17.566: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd container projected-secret-volume-test: STEP: delete the pod Mar 8 14:20:17.610: INFO: Waiting for pod pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd to disappear Mar 8 14:20:17.614: INFO: Pod pod-projected-secrets-453021a3-b7c9-4f2c-8006-93c8ab43eabd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:20:17.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2664" for this suite. 
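For reference, the kind of pod this test creates, consuming a Secret through a projected volume, can be sketched roughly as follows. Names and the image are illustrative, not the test's actual manifest (the e2e framework uses its own test image and generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # assumed image; the e2e test uses its own mount-test image
    command: ["cat", "/etc/projected-secret/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test    # must match an existing Secret, as in the test above
```

The test then waits for the pod to reach "success or failure" (Succeeded phase) and reads the container logs to verify the mounted secret content.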
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:17.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 14:20:21.711: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 14:20:21.715: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 14:20:23.715: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 14:20:23.719: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 14:20:25.715: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 14:20:25.719: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 14:20:27.715: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 14:20:27.719: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 14:20:29.715: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 14:20:29.718: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:20:29.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3960" for this suite. 
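A minimal sketch of a pod with a preStop exec hook like the one created and deleted above. This is illustrative only: the real e2e test's hook contacts the separately created HTTP hook-handler pod rather than writing a file.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox                # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container when deletion begins, before SIGTERM;
          # the e2e test instead has this command call its hook-handler pod.
          command: ["/bin/sh", "-c", "echo prestop > /tmp/prestop"]
```

The repeated "still exists" polling in the log reflects the graceful termination window during which the preStop hook runs before the pod object disappears.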
• [SLOW TEST:12.109 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:20:29.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
_http._tcp.dns-test-service.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8956.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8956.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.130.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.130.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.130.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.130.251_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8956.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8956.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8956.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8956.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8956.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.130.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.130.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.130.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.130.251_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 8 14:20:33.865: INFO: Unable to read wheezy_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.868: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.874: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.905: INFO: Unable to read jessie_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.909: INFO: Unable to read jessie_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.913: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod 
dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:33.937: INFO: Lookups using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 failed for: [wheezy_udp@dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_udp@dns-test-service.dns-8956.svc.cluster.local jessie_tcp@dns-test-service.dns-8956.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local] Mar 8 14:20:38.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.951: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.955: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.958: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod 
dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.990: INFO: Unable to read jessie_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.993: INFO: Unable to read jessie_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:38.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:39.031: INFO: Lookups using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 failed for: [wheezy_udp@dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_udp@dns-test-service.dns-8956.svc.cluster.local jessie_tcp@dns-test-service.dns-8956.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local] Mar 8 14:20:43.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-8956.svc.cluster.local from pod 
dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.946: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.950: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.953: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.974: INFO: Unable to read jessie_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.977: INFO: Unable to read jessie_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.980: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.982: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the 
requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:43.999: INFO: Lookups using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 failed for: [wheezy_udp@dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_udp@dns-test-service.dns-8956.svc.cluster.local jessie_tcp@dns-test-service.dns-8956.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local] Mar 8 14:20:48.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.948: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.951: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.974: INFO: Unable to read jessie_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods 
dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.977: INFO: Unable to read jessie_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.980: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:48.983: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:49.000: INFO: Lookups using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 failed for: [wheezy_udp@dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_udp@dns-test-service.dns-8956.svc.cluster.local jessie_tcp@dns-test-service.dns-8956.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local] Mar 8 14:20:53.941: INFO: Unable to read wheezy_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.944: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) 
Mar 8 14:20:53.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.950: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.970: INFO: Unable to read jessie_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.973: INFO: Unable to read jessie_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:53.994: INFO: Lookups using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 failed for: [wheezy_udp@dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local 
jessie_udp@dns-test-service.dns-8956.svc.cluster.local jessie_tcp@dns-test-service.dns-8956.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local] Mar 8 14:20:58.941: INFO: Unable to read wheezy_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.944: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.947: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.950: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.975: INFO: Unable to read jessie_udp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.979: INFO: Unable to read jessie_tcp@dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.982: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod 
dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:58.985: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local from pod dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893: the server could not find the requested resource (get pods dns-test-b7032bf9-000c-4028-a8d1-ae8444183893) Mar 8 14:20:59.003: INFO: Lookups using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 failed for: [wheezy_udp@dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@dns-test-service.dns-8956.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_udp@dns-test-service.dns-8956.svc.cluster.local jessie_tcp@dns-test-service.dns-8956.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8956.svc.cluster.local] Mar 8 14:21:03.997: INFO: DNS probes using dns-8956/dns-test-b7032bf9-000c-4028-a8d1-ae8444183893 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:21:04.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8956" for this suite. 
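The headless service whose A, SRV, and PTR records the dig probes above check can be sketched roughly as follows (names illustrative; the test also creates a regular ClusterIP service alongside it):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None        # headless: DNS resolves the name directly to the endpoint pod IPs
  selector:
    app: dns-test
  ports:
  - name: http           # named TCP port, which is what yields the _http._tcp SRV records
    port: 80
    protocol: TCP
```

The initial "Unable to read" failures are expected while the probe pod and DNS records converge; the test retries until, as above, all lookups eventually succeed.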
• [SLOW TEST:34.437 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":275,"skipped":4448,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 8 14:21:04.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 8 14:21:04.207: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 8 14:21:08.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6995" for this suite. 
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":276,"skipped":4470,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 14:21:08.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 8 14:21:08.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 8 14:21:11.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5345 create -f -'
Mar 8 14:21:13.428: INFO: stderr: ""
Mar 8 14:21:13.428: INFO: stdout: "e2e-test-crd-publish-openapi-8286-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 8 14:21:13.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5345 delete e2e-test-crd-publish-openapi-8286-crds test-cr'
Mar 8 14:21:13.546: INFO: stderr: ""
Mar 8 14:21:13.546: INFO: stdout: "e2e-test-crd-publish-openapi-8286-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 8 14:21:13.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5345 apply -f -'
Mar 8 14:21:13.789: INFO: stderr: ""
Mar 8 14:21:13.789: INFO: stdout: "e2e-test-crd-publish-openapi-8286-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 8 14:21:13.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5345 delete e2e-test-crd-publish-openapi-8286-crds test-cr'
Mar 8 14:21:13.892: INFO: stderr: ""
Mar 8 14:21:13.892: INFO: stdout: "e2e-test-crd-publish-openapi-8286-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 8 14:21:13.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8286-crds'
Mar 8 14:21:14.170: INFO: stderr: ""
Mar 8 14:21:14.170: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8286-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 14:21:16.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5345" for this suite.
• [SLOW TEST:7.998 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":277,"skipped":4478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 8 14:21:16.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-xvgz
STEP: Creating a pod to test atomic-volume-subpath
Mar 8 14:21:16.301: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xvgz" in namespace "subpath-9474" to be "success or failure"
Mar 8 14:21:16.309: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237644ms
Mar 8 14:21:18.346: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 2.044887321s
Mar 8 14:21:20.350: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 4.04888177s
Mar 8 14:21:22.354: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 6.052618738s
Mar 8 14:21:24.357: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 8.056345873s
Mar 8 14:21:26.361: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 10.060154351s
Mar 8 14:21:28.365: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 12.064222749s
Mar 8 14:21:30.369: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 14.0682473s
Mar 8 14:21:32.373: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 16.07216996s
Mar 8 14:21:34.378: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 18.076934024s
Mar 8 14:21:36.382: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Running", Reason="", readiness=true. Elapsed: 20.081221625s
Mar 8 14:21:38.385: INFO: Pod "pod-subpath-test-secret-xvgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.084453317s
STEP: Saw pod success
Mar 8 14:21:38.385: INFO: Pod "pod-subpath-test-secret-xvgz" satisfied condition "success or failure"
Mar 8 14:21:38.388: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-secret-xvgz container test-container-subpath-secret-xvgz:
STEP: delete the pod
Mar 8 14:21:38.412: INFO: Waiting for pod pod-subpath-test-secret-xvgz to disappear
Mar 8 14:21:38.423: INFO: Pod pod-subpath-test-secret-xvgz no longer exists
STEP: Deleting pod pod-subpath-test-secret-xvgz
Mar 8 14:21:38.423: INFO: Deleting pod "pod-subpath-test-secret-xvgz" in namespace "subpath-9474"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 8 14:21:38.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9474" for this suite.
• [SLOW TEST:22.220 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":278,"skipped":4507,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Mar 8 14:21:38.432: INFO: Running AfterSuite actions on all nodes
Mar 8 14:21:38.432: INFO: Running AfterSuite actions on node 1
Mar 8 14:21:38.432: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 3748.700 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS