I1116 09:02:50.588071 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1116 09:02:50.588316 7 e2e.go:129] Starting e2e run "c575447e-156c-40dd-8db6-565ac96c74c4" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1605517369 - Will randomize all specs
Will run 303 of 5234 specs

Nov 16 09:02:50.649: INFO: >>> kubeConfig: /root/.kube/config
Nov 16 09:02:50.652: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 16 09:02:50.672: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 16 09:02:50.706: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 16 09:02:50.706: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Nov 16 09:02:50.706: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 16 09:02:50.715: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Nov 16 09:02:50.715: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 16 09:02:50.715: INFO: e2e test version: v1.19.5-rc.0
Nov 16 09:02:50.716: INFO: kube-apiserver version: v1.19.0
Nov 16 09:02:50.716: INFO: >>> kubeConfig: /root/.kube/config
Nov 16 09:02:50.721: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:02:50.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Nov 16 09:02:50.762: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Nov 16 09:02:55.330: INFO: Successfully updated pod "annotationupdatea4205ea0-aa65-47c6-bbc8-c4b6f493f0ff"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:02:59.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8506" for this suite.
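For reference, this spec creates a pod whose projected downwardAPI volume mirrors the pod's annotations, then patches an annotation and waits for the mounted file to reflect the change. A minimal manifest of that shape might look as follows; the pod name, annotation key, and container command are illustrative (the test generates a unique name like the "annotationupdate…" one in the log), and the busybox image is borrowed from elsewhere in this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo   # illustrative; the e2e test generates a unique name
  annotations:
    build: initial              # the test later updates an annotation and expects the file below to refresh
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```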
• [SLOW TEST:8.658 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:02:59.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-fl4h
STEP: Creating a pod to test atomic-volume-subpath
Nov 16 09:02:59.486: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fl4h" in namespace "subpath-8914" to be "Succeeded or Failed"
Nov 16 09:02:59.503: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Pending", Reason="", readiness=false. Elapsed: 16.300369ms
Nov 16 09:03:01.674: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18745786s
Nov 16 09:03:03.679: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 4.192314528s
Nov 16 09:03:05.683: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 6.196937135s
Nov 16 09:03:07.687: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 8.201201193s
Nov 16 09:03:09.692: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 10.206005552s
Nov 16 09:03:11.697: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 12.211161759s
Nov 16 09:03:13.702: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 14.215563046s
Nov 16 09:03:15.706: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 16.2195171s
Nov 16 09:03:17.711: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 18.225190455s
Nov 16 09:03:19.717: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 20.231090626s
Nov 16 09:03:21.723: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Running", Reason="", readiness=true. Elapsed: 22.236366924s
Nov 16 09:03:23.727: INFO: Pod "pod-subpath-test-projected-fl4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.240549444s
STEP: Saw pod success
Nov 16 09:03:23.727: INFO: Pod "pod-subpath-test-projected-fl4h" satisfied condition "Succeeded or Failed"
Nov 16 09:03:23.730: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-fl4h container test-container-subpath-projected-fl4h:
STEP: delete the pod
Nov 16 09:03:23.796: INFO: Waiting for pod pod-subpath-test-projected-fl4h to disappear
Nov 16 09:03:23.805: INFO: Pod pod-subpath-test-projected-fl4h no longer exists
STEP: Deleting pod pod-subpath-test-projected-fl4h
Nov 16 09:03:23.805: INFO: Deleting pod "pod-subpath-test-projected-fl4h" in namespace "subpath-8914"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:03:23.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8914" for this suite.
• [SLOW TEST:24.436 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":2,"skipped":81,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:03:23.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Nov 16 09:03:23.864: INFO: PodSpec: initContainers in spec.initContainers
Nov 16 09:04:18.363: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e47729ff-4b72-4f94-9e01-fb765ff8dd0b", GenerateName:"", Namespace:"init-container-8254", SelfLink:"/api/v1/namespaces/init-container-8254/pods/pod-init-e47729ff-4b72-4f94-9e01-fb765ff8dd0b", UID:"975a3c89-25be-4c52-8c42-5d6153b75c38", ResourceVersion:"9767427", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63741114203, loc:(*time.Location)(0x77108c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"864598566"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"",
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0037b0300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0037b0320)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0037b0340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0037b0360)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nxqs2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00378e800), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxqs2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxqs2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nxqs2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003635fc8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001232620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003cc8050)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003cc8070)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003cc8078), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003cc807c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00364b7a0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114204, loc:(*time.Location)(0x77108c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114204, loc:(*time.Location)(0x77108c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114204, loc:(*time.Location)(0x77108c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114203, loc:(*time.Location)(0x77108c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.2.188", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.188"}}, StartTime:(*v1.Time)(0xc0037b0380), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001232700)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001232770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1b506f68dc62c247a2c4b319c0244e9bc6fb7fe80789766811ddfe6be84fa117", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037b03c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037b03a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003cc80ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:04:18.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8254" for this suite.
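The pod this spec builds can be reconstructed from the `&v1.Pod{...}` dump above: `init1` runs `/bin/false` and keeps failing, so `init2` and the app container `run1` must never start even though `restartPolicy` is `Always`. A manifest sketch with the images, commands, labels, and CPU quantities taken from that dump (only the metadata name is shortened here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo        # the test uses a generated name like pod-init-e47729ff-...
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]  # always fails; blocks init2 and run1 from ever starting
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 100m
```

The status in the dump matches this expectation: `init1` has `RestartCount:3` with a terminated state, while `init2` and `run1` are still waiting with `RestartCount:0`.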
• [SLOW TEST:54.643 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":3,"skipped":94,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:04:18.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:04:31.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1363" for this suite.
• [SLOW TEST:13.229 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":4,"skipped":98,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:04:31.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Nov 16 09:04:37.091: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:04:38.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2399" for this suite.
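The adoption/release flow in the ReplicaSet spec hinges on selector matching: an orphan pod carrying the `name: pod-adoption-release` label is adopted when a ReplicaSet with a matching selector appears, and released again once its label is changed. A sketch of such a ReplicaSet, inferred from the STEP lines above (the image and replica count are illustrative, not from the log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the pre-existing orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: k8s.gcr.io/pause:3.2  # illustrative image
```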
• [SLOW TEST:6.442 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":5,"skipped":103,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:04:38.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Nov 16 09:04:38.311: INFO: Waiting up to 5m0s for pod "pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1" in namespace "emptydir-5399" to be "Succeeded or Failed"
Nov 16 09:04:38.330: INFO: Pod "pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.078575ms
Nov 16 09:04:40.335: INFO: Pod "pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023923614s
Nov 16 09:04:42.340: INFO: Pod "pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029450747s
STEP: Saw pod success
Nov 16 09:04:42.340: INFO: Pod "pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1" satisfied condition "Succeeded or Failed"
Nov 16 09:04:42.343: INFO: Trying to get logs from node latest-worker pod pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1 container test-container:
STEP: delete the pod
Nov 16 09:04:42.365: INFO: Waiting for pod pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1 to disappear
Nov 16 09:04:42.369: INFO: Pod pod-aaa3cc4e-01a4-443a-b1a6-79aafbb860e1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:04:42.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5399" for this suite.
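This EmptyDir spec launches a short-lived pod that mounts an `emptyDir` volume with no `medium` set (i.e. node-disk backing) and inspects the mount's permissions, expecting it to run to completion ("Succeeded or Failed" with success). A rough stand-in for that pod, assuming an illustrative name and a busybox-based check in place of the e2e suite's own test image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # illustrative; the test generates a unique name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # stand-in for the e2e mount-test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # print the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # no medium specified, i.e. the default medium
```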
•
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":119,"failed":0}
SSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:04:42.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4426, will wait for the garbage collector to delete the pods
Nov 16 09:04:48.523: INFO: Deleting Job.batch foo took: 6.213491ms
Nov 16 09:04:48.923: INFO: Terminating Job.batch foo pods took: 400.184737ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:05:35.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4426" for this suite.
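The Job spec above creates a Job named `foo` with long-running pods, waits until the number of active pods equals the Job's parallelism, then deletes the Job and relies on the garbage collector to cascade the deletion to the pods. A sketch of such a Job; the name comes from the log, but the parallelism value, image, and command are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo                        # Job name as it appears in the log
spec:
  parallelism: 2                   # illustrative; the log only shows "active pods == parallelism"
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"] # keeps pods active until the Job is deleted
```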
• [SLOW TEST:53.357 seconds]
[sig-apps] Job
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":7,"skipped":123,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:05:35.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Nov 16 09:05:36.349: INFO: Pod name wrapped-volume-race-4e619e94-7ca6-4834-aad6-300f122ad2c3: Found 0 pods out of 5
Nov 16 09:05:41.363: INFO: Pod name wrapped-volume-race-4e619e94-7ca6-4834-aad6-300f122ad2c3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4e619e94-7ca6-4834-aad6-300f122ad2c3 in namespace emptydir-wrapper-2208, will wait for the garbage collector to delete the pods
Nov 16 09:05:55.450: INFO: Deleting ReplicationController wrapped-volume-race-4e619e94-7ca6-4834-aad6-300f122ad2c3 took: 7.710275ms
Nov 16 09:05:55.850: INFO: Terminating ReplicationController wrapped-volume-race-4e619e94-7ca6-4834-aad6-300f122ad2c3 pods took: 400.226745ms
STEP: Creating RC which spawns configmap-volume pods
Nov 16 09:06:06.022: INFO: Pod name wrapped-volume-race-5acdedd2-f6d9-429d-a97e-02a9ae15c07c: Found 0 pods out of 5
Nov 16 09:06:11.040: INFO: Pod name wrapped-volume-race-5acdedd2-f6d9-429d-a97e-02a9ae15c07c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5acdedd2-f6d9-429d-a97e-02a9ae15c07c in namespace emptydir-wrapper-2208, will wait for the garbage collector to delete the pods
Nov 16 09:06:25.259: INFO: Deleting ReplicationController wrapped-volume-race-5acdedd2-f6d9-429d-a97e-02a9ae15c07c took: 30.527078ms
Nov 16 09:06:25.660: INFO: Terminating ReplicationController wrapped-volume-race-5acdedd2-f6d9-429d-a97e-02a9ae15c07c pods took: 401.039743ms
STEP: Creating RC which spawns configmap-volume pods
Nov 16 09:06:35.805: INFO: Pod name wrapped-volume-race-0ae490f5-f2bb-4eb3-9d70-c61fb56cf863: Found 0 pods out of 5
Nov 16 09:06:40.812: INFO: Pod name wrapped-volume-race-0ae490f5-f2bb-4eb3-9d70-c61fb56cf863: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0ae490f5-f2bb-4eb3-9d70-c61fb56cf863 in namespace emptydir-wrapper-2208, will wait for the garbage collector to delete the pods
Nov 16 09:06:54.920: INFO: Deleting ReplicationController wrapped-volume-race-0ae490f5-f2bb-4eb3-9d70-c61fb56cf863 took: 7.937231ms
Nov 16 09:06:55.420: INFO: Terminating ReplicationController wrapped-volume-race-0ae490f5-f2bb-4eb3-9d70-c61fb56cf863 pods took: 500.201067ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:06.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2208" for this suite. • [SLOW TEST:90.918 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":8,"skipped":144,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:06.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 
configMap with name projected-configmap-test-volume-eb9049d7-97d4-4cf4-8176-d7ca2b3479f2 STEP: Creating a pod to test consume configMaps Nov 16 09:07:06.761: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641" in namespace "projected-3873" to be "Succeeded or Failed" Nov 16 09:07:06.780: INFO: Pod "pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641": Phase="Pending", Reason="", readiness=false. Elapsed: 18.836697ms Nov 16 09:07:08.784: INFO: Pod "pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022774018s Nov 16 09:07:10.789: INFO: Pod "pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027387289s STEP: Saw pod success Nov 16 09:07:10.789: INFO: Pod "pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641" satisfied condition "Succeeded or Failed" Nov 16 09:07:10.793: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641 container projected-configmap-volume-test: STEP: delete the pod Nov 16 09:07:10.836: INFO: Waiting for pod pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641 to disappear Nov 16 09:07:10.847: INFO: Pod pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:10.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3873" for this suite. 
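[Editor's note] The e2e framework builds this pod programmatically in Go, so the manifest never appears in the log. A minimal YAML sketch of a pod consuming a projected ConfigMap with defaultMode set could look like the following; the ConfigMap and pod names are taken from the log, while the image, mount path, and exact mode are assumptions:

```yaml
# Sketch only: names from the log, image/mountPath/defaultMode are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-7856fce7-c617-4a88-922c-23a8d5fec641
  namespace: projected-3873
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20  # assumption; not recorded in the log
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume    # assumption
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400   # the file mode under test; exact value is an assumption
      sources:
      - configMap:
          name: projected-configmap-test-volume-eb9049d7-97d4-4cf4-8176-d7ca2b3479f2
```

The pod can satisfy the "Succeeded or Failed" condition seen above because the test container exits after inspecting the mounted file's mode.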
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":146,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:10.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Nov 16 09:07:10.983: INFO: Waiting up to 5m0s for pod "var-expansion-46e883a2-562b-478f-986a-0120c247642c" in namespace "var-expansion-3798" to be "Succeeded or Failed" Nov 16 09:07:10.991: INFO: Pod "var-expansion-46e883a2-562b-478f-986a-0120c247642c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.840654ms Nov 16 09:07:13.073: INFO: Pod "var-expansion-46e883a2-562b-478f-986a-0120c247642c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089854165s Nov 16 09:07:15.092: INFO: Pod "var-expansion-46e883a2-562b-478f-986a-0120c247642c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109290872s STEP: Saw pod success Nov 16 09:07:15.092: INFO: Pod "var-expansion-46e883a2-562b-478f-986a-0120c247642c" satisfied condition "Succeeded or Failed" Nov 16 09:07:15.114: INFO: Trying to get logs from node latest-worker pod var-expansion-46e883a2-562b-478f-986a-0120c247642c container dapi-container: STEP: delete the pod Nov 16 09:07:15.194: INFO: Waiting for pod var-expansion-46e883a2-562b-478f-986a-0120c247642c to disappear Nov 16 09:07:15.200: INFO: Pod var-expansion-46e883a2-562b-478f-986a-0120c247642c no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:15.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3798" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":10,"skipped":164,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:15.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:20.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-518" for this suite. • [SLOW TEST:5.530 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":11,"skipped":166,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:20.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a 
collection of pod templates [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Nov 16 09:07:20.806: INFO: created test-podtemplate-1 Nov 16 09:07:20.812: INFO: created test-podtemplate-2 Nov 16 09:07:20.881: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Nov 16 09:07:20.885: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Nov 16 09:07:20.922: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:20.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4720" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":12,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:20.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:32.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1442" for this suite. • [SLOW TEST:11.269 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":13,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:32.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 16 09:07:32.295: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 16 09:07:32.310: INFO: Waiting for terminating namespaces to be deleted... 
Nov 16 09:07:32.313: INFO: Logging pods the apiserver thinks are on node latest-worker before test
Nov 16 09:07:32.317: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.317: INFO: Container kindnet-cni ready: true, restart count 0
Nov 16 09:07:32.317: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.317: INFO: Container kube-proxy ready: true, restart count 0
Nov 16 09:07:32.317: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
Nov 16 09:07:32.321: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.321: INFO: Container coredns ready: true, restart count 0
Nov 16 09:07:32.321: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.321: INFO: Container coredns ready: true, restart count 0
Nov 16 09:07:32.321: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.321: INFO: Container kindnet-cni ready: true, restart count 0
Nov 16 09:07:32.321: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.321: INFO: Container kube-proxy ready: true, restart count 0
Nov 16 09:07:32.321: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container status recorded)
Nov 16 09:07:32.321: INFO: Container local-path-provisioner ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f9a35d1c-6583-458c-93e0-a701155a3284 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-f9a35d1c-6583-458c-93e0-a701155a3284 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f9a35d1c-6583-458c-93e0-a701155a3284
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:07:49.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1019" for this suite.
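[Editor's note] The pod manifests themselves are not emitted to the log. A minimal sketch of pod1, assuming an image since none is recorded (the node label and port details are from the STEP lines above), might be:

```yaml
# Sketch only: the e2e framework builds these pods in Go, not from YAML.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: sched-pred-1019
spec:
  nodeSelector:
    kubernetes.io/e2e-f9a35d1c-6583-458c-93e0-a701155a3284: "90"
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20  # assumption; image not recorded
    ports:
    - containerPort: 8080   # assumption
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
```

pod2 is identical except for hostIP: 127.0.0.2, and pod3 additionally uses protocol: UDP; because the (hostIP, hostPort, protocol) tuples all differ, the scheduler places all three on the same node without a hostPort conflict.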
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.865 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":14,"skipped":274,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:49.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:07:49.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd" in namespace "projected-7136" to be "Succeeded or Failed" Nov 16 09:07:49.223: INFO: Pod "downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd": Phase="Pending", Reason="", readiness=false. Elapsed: 85.518745ms Nov 16 09:07:51.227: INFO: Pod "downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089151881s Nov 16 09:07:53.230: INFO: Pod "downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092973397s STEP: Saw pod success Nov 16 09:07:53.231: INFO: Pod "downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd" satisfied condition "Succeeded or Failed" Nov 16 09:07:53.235: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd container client-container: STEP: delete the pod Nov 16 09:07:53.267: INFO: Waiting for pod downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd to disappear Nov 16 09:07:53.286: INFO: Pod downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:53.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7136" for this suite. 
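[Editor's note] A hedged sketch of the pod this test creates, using the projected downward API volume schema; the pod name and namespace are from the log, while the image, mount path, and memory limit value are assumptions:

```yaml
# Sketch only: a projected downwardAPI volume exposing the container's memory limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-b23e55a8-23f9-4877-b4b8-45b791be52fd
  namespace: projected-7136
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20  # assumption
    resources:
      limits:
        memory: "64Mi"   # assumption; the test requires some limit to be set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo   # assumption
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```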
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":276,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:53.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4193/configmap-test-5dcf1c35-c5d1-44dc-9c12-15723931443f STEP: Creating a pod to test consume configMaps Nov 16 09:07:53.387: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59" in namespace "configmap-4193" to be "Succeeded or Failed" Nov 16 09:07:53.400: INFO: Pod "pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59": Phase="Pending", Reason="", readiness=false. Elapsed: 12.442906ms Nov 16 09:07:55.594: INFO: Pod "pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207204293s Nov 16 09:07:57.597: INFO: Pod "pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.210236663s Nov 16 09:07:59.601: INFO: Pod "pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.214227044s STEP: Saw pod success Nov 16 09:07:59.601: INFO: Pod "pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59" satisfied condition "Succeeded or Failed" Nov 16 09:07:59.604: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59 container env-test: STEP: delete the pod Nov 16 09:07:59.639: INFO: Waiting for pod pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59 to disappear Nov 16 09:07:59.644: INFO: Pod pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:59.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4193" for this suite. 
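[Editor's note] The ConfigMap and pod above are created via the Go client, so no YAML appears in the log. A sketch of the equivalent manifests, with the ConfigMap data, image, command, and env var name all marked as assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-5dcf1c35-c5d1-44dc-9c12-15723931443f
  namespace: configmap-4193
data:
  data-1: value-1   # assumption; the key/value pair is not recorded in the log
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-f8e11310-6634-413d-8186-08aa3659aa59
  namespace: configmap-4193
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox   # assumption
    command: ["sh", "-c", "env"]   # assumption; prints the environment so the test can check it
    env:
    - name: CONFIG_DATA_1   # assumption
      valueFrom:
        configMapKeyRef:
          name: configmap-test-5dcf1c35-c5d1-44dc-9c12-15723931443f
          key: data-1
```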
• [SLOW TEST:6.355 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":277,"failed":0} [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:59.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:07:59.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-717" for this suite. 
STEP: Destroying namespace "nspatchtest-1b0a5dbc-b4cd-4b9f-a599-c799def80392-1389" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":17,"skipped":277,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:07:59.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-89549b4a-12b1-4b01-842a-3eef59c02342 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-89549b4a-12b1-4b01-842a-3eef59c02342 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:08:06.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7612" for this suite. 
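[Editor's note] The "waiting to observe update in volume" step above works because ConfigMap data consumed through a volume mount, unlike data injected as environment variables, is refreshed by the kubelet after the ConfigMap object changes. The relevant volume stanza (a sketch; only the ConfigMap name is from the log) is simply:

```yaml
# Files projected from a mounted ConfigMap are re-synced by the kubelet
# on its periodic sync after the ConfigMap is updated.
volumes:
- name: configmap-volume   # assumption
  configMap:
    name: configmap-test-upd-89549b4a-12b1-4b01-842a-3eef59c02342
```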
• [SLOW TEST:6.183 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":280,"failed":0} [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:08:06.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-45630813-64c1-4fa9-bc67-87428defe3fa STEP: Creating a pod to test consume secrets Nov 16 09:08:06.274: INFO: Waiting up to 5m0s for pod "pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e" in namespace "secrets-1276" to be "Succeeded or Failed" Nov 16 09:08:06.281: INFO: Pod 
"pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148533ms Nov 16 09:08:08.373: INFO: Pod "pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098411707s Nov 16 09:08:10.377: INFO: Pod "pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102641347s STEP: Saw pod success Nov 16 09:08:10.377: INFO: Pod "pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e" satisfied condition "Succeeded or Failed" Nov 16 09:08:10.383: INFO: Trying to get logs from node latest-worker pod pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e container secret-volume-test: STEP: delete the pod Nov 16 09:08:10.395: INFO: Waiting for pod pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e to disappear Nov 16 09:08:10.400: INFO: Pod pod-secrets-801d953d-f71b-4dd9-a644-34011e07fe3e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:08:10.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1276" for this suite. STEP: Destroying namespace "secret-namespace-3226" for this suite. 
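[Editor's note] This test creates two Secrets with the same name in different namespaces (secrets-1276 and secret-namespace-3226, both destroyed above) and verifies the pod mounts the copy from its own namespace. A sketch, with the data values and which namespace holds which value as assumptions:

```yaml
# Same Secret name in two namespaces; a pod in secrets-1276 resolves the
# secretName reference within its own namespace only.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-45630813-64c1-4fa9-bc67-87428defe3fa
  namespace: secrets-1276
stringData:
  data-1: value-1        # assumption
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-45630813-64c1-4fa9-bc67-87428defe3fa
  namespace: secret-namespace-3226
stringData:
  data-1: other-value    # assumption
```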
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":280,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:10.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 09:08:11.568: INFO: Checking APIGroup: apiregistration.k8s.io
Nov 16 09:08:11.569: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Nov 16 09:08:11.569: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.569: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Nov 16 09:08:11.569: INFO: Checking APIGroup: extensions
Nov 16 09:08:11.570: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Nov 16 09:08:11.570: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Nov 16 09:08:11.570: INFO: extensions/v1beta1 matches extensions/v1beta1
Nov 16 09:08:11.570: INFO: Checking APIGroup: apps
Nov 16 09:08:11.571: INFO: PreferredVersion.GroupVersion: apps/v1
Nov 16 09:08:11.571: INFO: Versions found [{apps/v1 v1}]
Nov 16 09:08:11.571: INFO: apps/v1 matches apps/v1
Nov 16 09:08:11.571: INFO: Checking APIGroup: events.k8s.io
Nov 16 09:08:11.572: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Nov 16 09:08:11.572: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.572: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Nov 16 09:08:11.572: INFO: Checking APIGroup: authentication.k8s.io
Nov 16 09:08:11.573: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Nov 16 09:08:11.573: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.573: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Nov 16 09:08:11.573: INFO: Checking APIGroup: authorization.k8s.io
Nov 16 09:08:11.574: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Nov 16 09:08:11.574: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.574: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Nov 16 09:08:11.574: INFO: Checking APIGroup: autoscaling
Nov 16 09:08:11.575: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Nov 16 09:08:11.575: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Nov 16 09:08:11.575: INFO: autoscaling/v1 matches autoscaling/v1
Nov 16 09:08:11.575: INFO: Checking APIGroup: batch
Nov 16 09:08:11.576: INFO: PreferredVersion.GroupVersion: batch/v1
Nov 16 09:08:11.576: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Nov 16 09:08:11.576: INFO: batch/v1 matches batch/v1
Nov 16 09:08:11.576: INFO: Checking APIGroup: certificates.k8s.io
Nov 16 09:08:11.577: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Nov 16 09:08:11.577: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.577: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Nov 16 09:08:11.577: INFO: Checking APIGroup: networking.k8s.io
Nov 16 09:08:11.578: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Nov 16 09:08:11.578: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.578: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Nov 16 09:08:11.578: INFO: Checking APIGroup: policy
Nov 16 09:08:11.579: INFO: PreferredVersion.GroupVersion: policy/v1beta1
Nov 16 09:08:11.579: INFO: Versions found [{policy/v1beta1 v1beta1}]
Nov 16 09:08:11.579: INFO: policy/v1beta1 matches policy/v1beta1
Nov 16 09:08:11.579: INFO: Checking APIGroup: rbac.authorization.k8s.io
Nov 16 09:08:11.580: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Nov 16 09:08:11.580: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.580: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Nov 16 09:08:11.580: INFO: Checking APIGroup: storage.k8s.io
Nov 16 09:08:11.581: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Nov 16 09:08:11.581: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.581: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Nov 16 09:08:11.581: INFO: Checking APIGroup: admissionregistration.k8s.io
Nov 16 09:08:11.581: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Nov 16 09:08:11.581: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.581: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Nov 16 09:08:11.581: INFO: Checking APIGroup: apiextensions.k8s.io
Nov 16 09:08:11.582: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Nov 16 09:08:11.582: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.582: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Nov 16 09:08:11.582: INFO: Checking APIGroup: scheduling.k8s.io
Nov 16 09:08:11.583: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Nov 16 09:08:11.583: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.583: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Nov 16 09:08:11.583: INFO: Checking APIGroup: coordination.k8s.io
Nov 16 09:08:11.584: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Nov 16 09:08:11.584: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.584: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Nov 16 09:08:11.584: INFO: Checking APIGroup: node.k8s.io
Nov 16 09:08:11.585: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1
Nov 16 09:08:11.585: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.585: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1
Nov 16 09:08:11.585: INFO: Checking APIGroup: discovery.k8s.io
Nov 16 09:08:11.586: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
Nov 16 09:08:11.586: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
Nov 16 09:08:11.586: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:08:11.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-6948" for this suite.
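Each completed spec in this run emits a machine-readable JSON trailer (the `{"msg":"PASSED …","total":303,"completed":…,"skipped":…,"failed":…}` lines). Below is a minimal Go sketch for extracting those trailers when post-processing a log like this one; the `specResult` type and `parseSpecResult` helper are illustrative assumptions, not part of the e2e framework:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// specResult mirrors the per-spec JSON trailer printed in this log, e.g.
// {"msg":"PASSED ...","total":303,"completed":20,"skipped":297,"failed":0}
type specResult struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

// parseSpecResult (hypothetical helper) pulls the first JSON object out of a
// log line, tolerating the leading "•" pass marker and trailing skip markers.
func parseSpecResult(line string) (specResult, bool) {
	i := strings.Index(line, `{"msg":`)
	if i < 0 {
		return specResult{}, false
	}
	var r specResult
	// A Decoder stops at the end of the first JSON value, so trailing
	// text such as " SS" on the same line does not cause an error.
	dec := json.NewDecoder(strings.NewReader(line[i:]))
	if err := dec.Decode(&r); err != nil {
		return specResult{}, false
	}
	return r, true
}

func main() {
	line := `•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":20,"skipped":297,"failed":0} SS`
	if r, ok := parseSpecResult(line); ok {
		// prints: 20/303 specs completed, 297 skipped, 0 failed
		fmt.Printf("%d/%d specs completed, %d skipped, %d failed\n", r.Completed, r.Total, r.Skipped, r.Failed)
	}
}
```

Using `json.Decoder` rather than `json.Unmarshal` is the key design choice here: `Unmarshal` rejects trailing non-JSON text, while a decoder simply stops after the first value.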
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":20,"skipped":297,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:11.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-4656
[It] Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4656
STEP: Creating statefulset with conflicting port in namespace statefulset-4656
STEP: Waiting until pod test-pod will start running in namespace statefulset-4656
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4656
Nov 16 09:08:19.790: INFO: Observed stateful pod in namespace: statefulset-4656, name: ss-0, uid: 2de59d09-d56e-4c83-adc8-683b1b8f62fb, status phase: Pending. Waiting for statefulset controller to delete.
Nov 16 09:08:19.913: INFO: Observed stateful pod in namespace: statefulset-4656, name: ss-0, uid: 2de59d09-d56e-4c83-adc8-683b1b8f62fb, status phase: Failed. Waiting for statefulset controller to delete.
Nov 16 09:08:19.933: INFO: Observed stateful pod in namespace: statefulset-4656, name: ss-0, uid: 2de59d09-d56e-4c83-adc8-683b1b8f62fb, status phase: Failed. Waiting for statefulset controller to delete.
Nov 16 09:08:20.062: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4656
STEP: Removing pod with conflicting port in namespace statefulset-4656
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4656 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Nov 16 09:08:26.228: INFO: Deleting all statefulset in ns statefulset-4656
Nov 16 09:08:26.231: INFO: Scaling statefulset ss to 0
Nov 16 09:08:36.262: INFO: Waiting for statefulset status.replicas updated to 0
Nov 16 09:08:36.265: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:08:36.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4656" for this suite.
• [SLOW TEST:24.696 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":21,"skipped":299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:36.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6be74def-72b2-4670-b165-a935bcb08ede
STEP: Creating a pod to test consume configMaps
Nov 16 09:08:36.398: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3" in namespace "projected-4692" to be "Succeeded or Failed"
Nov 16 09:08:36.419: INFO: Pod "pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.14788ms
Nov 16 09:08:38.423: INFO: Pod "pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025717205s
Nov 16 09:08:40.427: INFO: Pod "pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029751469s
STEP: Saw pod success
Nov 16 09:08:40.427: INFO: Pod "pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3" satisfied condition "Succeeded or Failed"
Nov 16 09:08:40.430: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3 container projected-configmap-volume-test:
STEP: delete the pod
Nov 16 09:08:40.495: INFO: Waiting for pod pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3 to disappear
Nov 16 09:08:40.503: INFO: Pod pod-projected-configmaps-f9232d73-0e5d-4377-a0e9-fefb717c55c3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:08:40.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4692" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":326,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
[Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:40.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:08:47.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4817" for this suite.
• [SLOW TEST:7.109 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":23,"skipped":353,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:47.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Nov 16 09:08:47.726: INFO: Waiting up to 5m0s for pod "var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f" in namespace "var-expansion-6993" to be "Succeeded or Failed"
Nov 16 09:08:47.731: INFO: Pod "var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.199331ms
Nov 16 09:08:49.735: INFO: Pod "var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008972317s
Nov 16 09:08:51.739: INFO: Pod "var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012984471s
STEP: Saw pod success
Nov 16 09:08:51.739: INFO: Pod "var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f" satisfied condition "Succeeded or Failed"
Nov 16 09:08:51.741: INFO: Trying to get logs from node latest-worker pod var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f container dapi-container:
STEP: delete the pod
Nov 16 09:08:51.756: INFO: Waiting for pod var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f to disappear
Nov 16 09:08:51.773: INFO: Pod var-expansion-74854077-ff70-4ac6-bc6f-f6cc36eb6c4f no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:08:51.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6993" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":370,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:51.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-458ac90a-2ad2-44dd-b0eb-b29f3f2730b4
STEP: Creating a pod to test consume secrets
Nov 16 09:08:51.895: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c" in namespace "projected-8387" to be "Succeeded or Failed"
Nov 16 09:08:51.898: INFO: Pod "pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.179702ms
Nov 16 09:08:53.902: INFO: Pod "pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00706091s
Nov 16 09:08:55.907: INFO: Pod "pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011798246s
STEP: Saw pod success
Nov 16 09:08:55.907: INFO: Pod "pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c" satisfied condition "Succeeded or Failed"
Nov 16 09:08:55.910: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c container projected-secret-volume-test:
STEP: delete the pod
Nov 16 09:08:55.953: INFO: Waiting for pod pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c to disappear
Nov 16 09:08:55.996: INFO: Pod pod-projected-secrets-c6cb44b7-715a-4c87-97ef-a6cc0653132c no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:08:55.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8387" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:08:56.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 09:08:56.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Nov 16 09:08:59.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 create -f -'
Nov 16 09:09:02.202: INFO: stderr: ""
Nov 16 09:09:02.203: INFO: stdout: "e2e-test-crd-publish-openapi-6006-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Nov 16 09:09:02.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 delete e2e-test-crd-publish-openapi-6006-crds test-foo'
Nov 16 09:09:02.318: INFO: stderr: ""
Nov 16 09:09:02.318: INFO: stdout: "e2e-test-crd-publish-openapi-6006-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Nov 16 09:09:02.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 apply -f -'
Nov 16 09:09:02.629: INFO: stderr: ""
Nov 16 09:09:02.629: INFO: stdout: "e2e-test-crd-publish-openapi-6006-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Nov 16 09:09:02.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 delete e2e-test-crd-publish-openapi-6006-crds test-foo'
Nov 16 09:09:02.821: INFO: stderr: ""
Nov 16 09:09:02.821: INFO: stdout: "e2e-test-crd-publish-openapi-6006-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Nov 16 09:09:02.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 create -f -'
Nov 16 09:09:03.107: INFO: rc: 1
Nov 16 09:09:03.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 apply -f -'
Nov 16 09:09:03.728: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Nov 16 09:09:03.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 create -f -'
Nov 16 09:09:04.538: INFO: rc: 1
Nov 16 09:09:04.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2999 apply -f -'
Nov 16 09:09:04.864: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Nov 16 09:09:04.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6006-crds'
Nov 16 09:09:05.166: INFO: stderr: ""
Nov 16 09:09:05.167: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Nov 16 09:09:05.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6006-crds.metadata'
Nov 16 09:09:05.482: INFO: stderr: ""
Nov 16 09:09:05.482: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended on by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n pass them unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Nov 16 09:09:05.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6006-crds.spec' Nov 16 09:09:05.760: INFO: stderr: "" Nov 16 09:09:05.760: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Nov 16 09:09:05.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6006-crds.spec.bars' Nov 16 09:09:06.034: INFO: stderr: "" Nov 16 09:09:06.034: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Nov 16 09:09:06.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6006-crds.spec.bars2' Nov 16 09:09:06.329: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:09.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2999" for this suite. • [SLOW TEST:13.404 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":26,"skipped":414,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:09.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: 
Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:09:09.911: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:09:11.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114549, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114549, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114550, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114549, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:09:15.080: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:15.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-268" for this suite. STEP: Destroying namespace "webhook-268-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.311 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":27,"skipped":415,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:15.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service 
account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 16 09:09:19.891: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:19.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6661" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":28,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:19.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:09:19.972: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89" in namespace "downward-api-3774" to be "Succeeded or Failed" Nov 16 09:09:20.014: INFO: Pod "downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89": Phase="Pending", Reason="", readiness=false. Elapsed: 42.099061ms Nov 16 09:09:22.019: INFO: Pod "downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047035514s Nov 16 09:09:24.023: INFO: Pod "downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051148453s STEP: Saw pod success Nov 16 09:09:24.023: INFO: Pod "downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89" satisfied condition "Succeeded or Failed" Nov 16 09:09:24.027: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89 container client-container: STEP: delete the pod Nov 16 09:09:24.062: INFO: Waiting for pod downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89 to disappear Nov 16 09:09:24.069: INFO: Pod downwardapi-volume-f63ff60b-8332-49fd-9cd0-36a371976f89 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:24.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3774" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:24.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not 
be served [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Nov 16 09:09:24.187: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:40.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5436" for this suite. • [SLOW TEST:16.516 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":30,"skipped":465,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:40.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-3f3967f6-ca62-4684-b346-d9034a5b9030 STEP: Creating a pod to test consume secrets Nov 16 09:09:40.657: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b669595-ac12-407a-b998-975413566686" in namespace "projected-5514" to be "Succeeded or Failed" Nov 16 09:09:40.673: INFO: Pod "pod-projected-secrets-9b669595-ac12-407a-b998-975413566686": Phase="Pending", Reason="", readiness=false. Elapsed: 15.676691ms Nov 16 09:09:42.731: INFO: Pod "pod-projected-secrets-9b669595-ac12-407a-b998-975413566686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074067601s Nov 16 09:09:44.735: INFO: Pod "pod-projected-secrets-9b669595-ac12-407a-b998-975413566686": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.078406696s STEP: Saw pod success Nov 16 09:09:44.735: INFO: Pod "pod-projected-secrets-9b669595-ac12-407a-b998-975413566686" satisfied condition "Succeeded or Failed" Nov 16 09:09:44.738: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9b669595-ac12-407a-b998-975413566686 container projected-secret-volume-test: STEP: delete the pod Nov 16 09:09:44.895: INFO: Waiting for pod pod-projected-secrets-9b669595-ac12-407a-b998-975413566686 to disappear Nov 16 09:09:44.901: INFO: Pod pod-projected-secrets-9b669595-ac12-407a-b998-975413566686 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:44.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5514" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":31,"skipped":482,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:44.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-921364b6-c519-49f2-b807-c2efedc7f146 STEP: Creating a pod to test consume configMaps Nov 16 09:09:45.028: INFO: Waiting up to 5m0s for pod "pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52" in namespace "configmap-971" to be "Succeeded or Failed" Nov 16 09:09:45.040: INFO: Pod "pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52": Phase="Pending", Reason="", readiness=false. Elapsed: 11.521538ms Nov 16 09:09:47.044: INFO: Pod "pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015941294s Nov 16 09:09:49.047: INFO: Pod "pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018940539s STEP: Saw pod success Nov 16 09:09:49.048: INFO: Pod "pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52" satisfied condition "Succeeded or Failed" Nov 16 09:09:49.050: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52 container configmap-volume-test: STEP: delete the pod Nov 16 09:09:49.094: INFO: Waiting for pod pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52 to disappear Nov 16 09:09:49.100: INFO: Pod pod-configmaps-0809f00a-bff5-430c-90fb-b042bd627c52 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:49.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-971" for this suite. 
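[Editor's note: a minimal sketch of the kind of objects the ConfigMap-volume test above creates. The names, image, UID, and key/path values here are illustrative assumptions, not the randomized values the e2e framework generates.]

```yaml
# Sketch: consume a ConfigMap as a volume with key-to-path "mappings"
# (the items: list), running as a non-root user. All names are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                 # non-root, per the test's title
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                        # the "mappings": remap a key to a custom path
      - key: data-1
        path: path/to/data-1
```

With restartPolicy Never, such a pod ends in phase Succeeded once the container exits cleanly, mirroring the "Succeeded or Failed" wait in the log above.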
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":32,"skipped":487,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:49.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Nov 16 09:09:49.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3963' Nov 16 09:09:49.302: INFO: stderr: "" Nov 16 09:09:49.302: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Nov 16 09:09:49.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3963' Nov 16 09:09:52.511: INFO: stderr: "" Nov 16 09:09:52.511: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:09:52.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3963" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":33,"skipped":490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:09:52.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2804 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2804 STEP: creating replication controller externalsvc in namespace services-2804 I1116 09:09:52.833827 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2804, replica count: 2 I1116 09:09:55.884156 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:09:58.884424 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Nov 16 09:09:58.957: INFO: Creating new exec pod Nov 16 09:10:02.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2804 execpod24sfb -- /bin/sh -x -c nslookup nodeport-service.services-2804.svc.cluster.local' Nov 16 09:10:03.298: INFO: stderr: "I1116 09:10:03.135160 299 log.go:181] (0xc000765080) (0xc0006d25a0) Create stream\nI1116 09:10:03.135218 299 log.go:181] (0xc000765080) (0xc0006d25a0) Stream added, broadcasting: 1\nI1116 09:10:03.141218 299 log.go:181] (0xc000765080) Reply frame received for 1\nI1116 09:10:03.141265 299 log.go:181] (0xc000765080) (0xc00075c000) Create stream\nI1116 09:10:03.141282 299 log.go:181] (0xc000765080) (0xc00075c000) Stream added, broadcasting: 3\nI1116 09:10:03.142349 299 log.go:181] (0xc000765080) Reply frame received for 3\nI1116 09:10:03.142416 299 log.go:181] (0xc000765080) (0xc0006d2000) Create stream\nI1116 09:10:03.142438 299 
log.go:181] (0xc000765080) (0xc0006d2000) Stream added, broadcasting: 5\nI1116 09:10:03.143172 299 log.go:181] (0xc000765080) Reply frame received for 5\nI1116 09:10:03.262202 299 log.go:181] (0xc000765080) Data frame received for 5\nI1116 09:10:03.262228 299 log.go:181] (0xc0006d2000) (5) Data frame handling\nI1116 09:10:03.262245 299 log.go:181] (0xc0006d2000) (5) Data frame sent\n+ nslookup nodeport-service.services-2804.svc.cluster.local\nI1116 09:10:03.287822 299 log.go:181] (0xc000765080) Data frame received for 3\nI1116 09:10:03.287852 299 log.go:181] (0xc00075c000) (3) Data frame handling\nI1116 09:10:03.287872 299 log.go:181] (0xc00075c000) (3) Data frame sent\nI1116 09:10:03.289114 299 log.go:181] (0xc000765080) Data frame received for 3\nI1116 09:10:03.289192 299 log.go:181] (0xc00075c000) (3) Data frame handling\nI1116 09:10:03.289223 299 log.go:181] (0xc00075c000) (3) Data frame sent\nI1116 09:10:03.289244 299 log.go:181] (0xc000765080) Data frame received for 3\nI1116 09:10:03.289256 299 log.go:181] (0xc00075c000) (3) Data frame handling\nI1116 09:10:03.289322 299 log.go:181] (0xc000765080) Data frame received for 5\nI1116 09:10:03.289346 299 log.go:181] (0xc0006d2000) (5) Data frame handling\nI1116 09:10:03.291212 299 log.go:181] (0xc000765080) Data frame received for 1\nI1116 09:10:03.291226 299 log.go:181] (0xc0006d25a0) (1) Data frame handling\nI1116 09:10:03.291233 299 log.go:181] (0xc0006d25a0) (1) Data frame sent\nI1116 09:10:03.291240 299 log.go:181] (0xc000765080) (0xc0006d25a0) Stream removed, broadcasting: 1\nI1116 09:10:03.291278 299 log.go:181] (0xc000765080) Go away received\nI1116 09:10:03.291516 299 log.go:181] (0xc000765080) (0xc0006d25a0) Stream removed, broadcasting: 1\nI1116 09:10:03.291536 299 log.go:181] (0xc000765080) (0xc00075c000) Stream removed, broadcasting: 3\nI1116 09:10:03.291560 299 log.go:181] (0xc000765080) (0xc0006d2000) Stream removed, broadcasting: 5\n" Nov 16 09:10:03.298: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2804.svc.cluster.local\tcanonical name = externalsvc.services-2804.svc.cluster.local.\nName:\texternalsvc.services-2804.svc.cluster.local\nAddress: 10.102.153.113\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2804, will wait for the garbage collector to delete the pods Nov 16 09:10:03.360: INFO: Deleting ReplicationController externalsvc took: 7.152725ms Nov 16 09:10:03.760: INFO: Terminating ReplicationController externalsvc pods took: 400.224843ms Nov 16 09:10:15.794: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:15.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2804" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.257 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":34,"skipped":519,"failed":0} S ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:15.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:16.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1" for this suite. 
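The Endpoint lifecycle test above creates, updates, patches, and deletes an Endpoints object. The patch step conceptually follows JSON merge-patch (RFC 7386) semantics for simple fields; a minimal sketch of those semantics applied to an Endpoints-like dict (a hypothetical helper, not the e2e framework's own code):

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    None deletes a key, any other value replaces the target."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# Hypothetical Endpoints-like object, patched the way the test patches labels.
base = {"metadata": {"name": "example-endpoint", "labels": {"test": "original"}}}
patched = json_merge_patch(base, {"metadata": {"labels": {"test": "updated"}}})
```

The original object is left untouched; only the returned copy reflects the patch.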
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":35,"skipped":520,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:16.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 16 09:10:16.212: INFO: Waiting up to 5m0s for pod "pod-41ffab32-da1e-4293-90fc-7a7e12102a02" in namespace "emptydir-6936" to be "Succeeded or Failed" Nov 16 09:10:16.222: INFO: Pod "pod-41ffab32-da1e-4293-90fc-7a7e12102a02": Phase="Pending", Reason="", readiness=false. Elapsed: 10.196246ms Nov 16 09:10:18.227: INFO: Pod "pod-41ffab32-da1e-4293-90fc-7a7e12102a02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015266671s Nov 16 09:10:20.369: INFO: Pod "pod-41ffab32-da1e-4293-90fc-7a7e12102a02": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.157370388s STEP: Saw pod success Nov 16 09:10:20.369: INFO: Pod "pod-41ffab32-da1e-4293-90fc-7a7e12102a02" satisfied condition "Succeeded or Failed" Nov 16 09:10:20.377: INFO: Trying to get logs from node latest-worker pod pod-41ffab32-da1e-4293-90fc-7a7e12102a02 container test-container: STEP: delete the pod Nov 16 09:10:21.036: INFO: Waiting for pod pod-41ffab32-da1e-4293-90fc-7a7e12102a02 to disappear Nov 16 09:10:21.048: INFO: Pod pod-41ffab32-da1e-4293-90fc-7a7e12102a02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:21.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6936" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":527,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:21.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:10:22.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:10:24.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114622, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114622, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114622, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114622, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:10:27.301: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:10:27.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3430-crds.webhook.example.com via 
the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:28.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2005" for this suite. STEP: Destroying namespace "webhook-2005-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.899 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":37,"skipped":529,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:28.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:10:29.031: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:33.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2538" for this suite. 
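The websocket log test above streams container logs through the apiserver's pod `log` subresource. A small sketch of how a client might construct that request path (the `/api/v1/.../log` path and `container`/`follow` query parameters are the real API surface; the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def pod_log_path(namespace, name, container=None, follow=False):
    """Build the apiserver request path for a pod's log subresource."""
    path = f"/api/v1/namespaces/{namespace}/pods/{name}/log"
    params = {}
    if container:
        params["container"] = container
    if follow:
        params["follow"] = "true"
    return path + ("?" + urlencode(params) if params else "")
```

A websocket client would issue a GET against this path with an upgrade header to receive the streamed log frames.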
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:33.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:49.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8013" for this suite. 
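The Job test above relies on `restartPolicy: OnFailure`: a task that sometimes fails is restarted in place until the Job reaches its completion count. A toy simulation of that retry loop (deterministic stand-in for the flaky task, not the controller's actual logic):

```python
def run_to_completion(task, max_restarts=10):
    """Retry a task locally (restartPolicy: OnFailure) until it succeeds,
    returning the attempt number that completed."""
    for attempt in range(1, max_restarts + 1):
        if task(attempt):
            return attempt
    raise RuntimeError("job did not complete within restart budget")

def flaky(attempt):
    # Fails on the first two attempts, then succeeds, mimicking
    # "tasks sometimes fail and are locally restarted".
    return attempt > 2
```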
• [SLOW TEST:16.147 seconds] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":39,"skipped":583,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:49.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 16 09:10:49.342: INFO: Waiting up to 5m0s for pod "pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5" in namespace "emptydir-1459" to be "Succeeded or Failed" Nov 16 09:10:49.354: INFO: Pod "pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.024135ms Nov 16 09:10:51.359: INFO: Pod "pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016730151s Nov 16 09:10:53.363: INFO: Pod "pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020814476s STEP: Saw pod success Nov 16 09:10:53.363: INFO: Pod "pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5" satisfied condition "Succeeded or Failed" Nov 16 09:10:53.365: INFO: Trying to get logs from node latest-worker pod pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5 container test-container: STEP: delete the pod Nov 16 09:10:53.398: INFO: Waiting for pod pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5 to disappear Nov 16 09:10:53.441: INFO: Pod pod-551b45d4-27c1-4eff-af5c-1f79c061a6c5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:53.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1459" for this suite. 
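The emptydir test above writes a file into the volume with mode 0644 and asserts the mode sticks. The underlying check can be sketched with plain POSIX file operations (a local stand-in, not the in-pod mount tester):

```python
import os
import stat
import tempfile

def write_with_mode(path, data, mode):
    """Write data, force the file mode with chmod (unaffected by umask),
    and return the mode actually recorded on disk."""
    with open(path, "w") as f:
        f.write(data)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)
```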
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":602,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:53.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:10:53.994: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:10:56.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114654, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114654, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114654, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114653, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:10:59.038: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:10:59.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3189" for this suite. STEP: Destroying namespace "webhook-3189-markers" for this suite. 
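The validating-webhook test above flips the webhook's rules so that `CREATE` is first excluded and then re-included, checking in each state whether a non-compliant ConfigMap is rejected. The rule-matching decision can be sketched as (a simplified matcher over rule dicts, not the apiserver's implementation):

```python
def rules_match(rules, operation, resource):
    """Return True if any webhook rule covers this admission operation
    and resource (the '*' operation matches everything)."""
    for rule in rules:
        ops = rule.get("operations", [])
        if ("*" in ops or operation in ops) and resource in rule.get("resources", []):
            return True
    return False

# Rules as created by the test, then as updated to drop the create operation.
initial = [{"operations": ["CREATE"], "resources": ["configmaps"]}]
updated = [{"operations": ["UPDATE"], "resources": ["configmaps"]}]
```

With `initial` rules the non-compliant ConfigMap create is intercepted; with `updated` rules it passes through unvalidated, which is exactly what the test asserts.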
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.840 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":41,"skipped":606,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:10:59.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:10:59.341: INFO: 
>>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 16 09:11:02.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4889 create -f -' Nov 16 09:11:06.909: INFO: stderr: "" Nov 16 09:11:06.909: INFO: stdout: "e2e-test-crd-publish-openapi-8782-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 16 09:11:06.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4889 delete e2e-test-crd-publish-openapi-8782-crds test-cr' Nov 16 09:11:07.051: INFO: stderr: "" Nov 16 09:11:07.051: INFO: stdout: "e2e-test-crd-publish-openapi-8782-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Nov 16 09:11:07.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4889 apply -f -' Nov 16 09:11:07.551: INFO: stderr: "" Nov 16 09:11:07.551: INFO: stdout: "e2e-test-crd-publish-openapi-8782-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 16 09:11:07.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4889 delete e2e-test-crd-publish-openapi-8782-crds test-cr' Nov 16 09:11:11.779: INFO: stderr: "" Nov 16 09:11:11.779: INFO: stdout: "e2e-test-crd-publish-openapi-8782-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 16 09:11:11.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8782-crds' Nov 16 09:11:12.361: INFO: stderr: "" Nov 16 09:11:12.362: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-8782-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:11:15.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4889" for this suite. 
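The CRD test above verifies that fields under a schema node marked `x-kubernetes-preserve-unknown-fields` survive, while unknown fields elsewhere are pruned. A toy sketch of that pruning behavior (illustrative only; the apiserver's structural-schema pruning is considerably more involved):

```python
def prune(obj, schema):
    """Drop properties not declared in the schema, except under nodes
    that set x-kubernetes-preserve-unknown-fields."""
    if schema.get("x-kubernetes-preserve-unknown-fields"):
        return obj
    if not isinstance(obj, dict):
        return obj
    props = schema.get("properties", {})
    return {k: prune(v, props[k]) for k, v in obj.items() if k in props}

# Hypothetical schema: spec preserves unknown fields, the rest is pruned.
schema = {"properties": {"apiVersion": {}, "spec": {"x-kubernetes-preserve-unknown-fields": True}}}
cr = {"apiVersion": "v1", "spec": {"bars": [{"name": "b"}]}, "junk": 1}
```

Here `junk` is pruned at the top level, while the arbitrary contents of `spec` are kept verbatim.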
• [SLOW TEST:16.053 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":42,"skipped":623,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:11:15.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-264 STEP: creating service affinity-nodeport in namespace services-264 STEP: creating 
replication controller affinity-nodeport in namespace services-264 I1116 09:11:15.487143 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-264, replica count: 3 I1116 09:11:18.537505 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:11:21.537807 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:11:21.550: INFO: Creating new exec pod Nov 16 09:11:26.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-264 execpod-affinitywtv5n -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Nov 16 09:11:26.833: INFO: stderr: "I1116 09:11:26.713903 409 log.go:181] (0xc0006493f0) (0xc000640aa0) Create stream\nI1116 09:11:26.713968 409 log.go:181] (0xc0006493f0) (0xc000640aa0) Stream added, broadcasting: 1\nI1116 09:11:26.719881 409 log.go:181] (0xc0006493f0) Reply frame received for 1\nI1116 09:11:26.719945 409 log.go:181] (0xc0006493f0) (0xc000640b40) Create stream\nI1116 09:11:26.719977 409 log.go:181] (0xc0006493f0) (0xc000640b40) Stream added, broadcasting: 3\nI1116 09:11:26.721201 409 log.go:181] (0xc0006493f0) Reply frame received for 3\nI1116 09:11:26.721251 409 log.go:181] (0xc0006493f0) (0xc00072e280) Create stream\nI1116 09:11:26.721270 409 log.go:181] (0xc0006493f0) (0xc00072e280) Stream added, broadcasting: 5\nI1116 09:11:26.724270 409 log.go:181] (0xc0006493f0) Reply frame received for 5\nI1116 09:11:26.816487 409 log.go:181] (0xc0006493f0) Data frame received for 5\nI1116 09:11:26.816542 409 log.go:181] (0xc00072e280) (5) Data frame handling\nI1116 09:11:26.816575 409 log.go:181] (0xc00072e280) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI1116 09:11:26.823915 409 log.go:181] (0xc0006493f0) Data 
frame received for 5\nI1116 09:11:26.823954 409 log.go:181] (0xc00072e280) (5) Data frame handling\nI1116 09:11:26.823996 409 log.go:181] (0xc00072e280) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1116 09:11:26.824097 409 log.go:181] (0xc0006493f0) Data frame received for 3\nI1116 09:11:26.824125 409 log.go:181] (0xc000640b40) (3) Data frame handling\nI1116 09:11:26.824305 409 log.go:181] (0xc0006493f0) Data frame received for 5\nI1116 09:11:26.824328 409 log.go:181] (0xc00072e280) (5) Data frame handling\nI1116 09:11:26.826194 409 log.go:181] (0xc0006493f0) Data frame received for 1\nI1116 09:11:26.826211 409 log.go:181] (0xc000640aa0) (1) Data frame handling\nI1116 09:11:26.826225 409 log.go:181] (0xc000640aa0) (1) Data frame sent\nI1116 09:11:26.826246 409 log.go:181] (0xc0006493f0) (0xc000640aa0) Stream removed, broadcasting: 1\nI1116 09:11:26.826417 409 log.go:181] (0xc0006493f0) Go away received\nI1116 09:11:26.826696 409 log.go:181] (0xc0006493f0) (0xc000640aa0) Stream removed, broadcasting: 1\nI1116 09:11:26.826723 409 log.go:181] (0xc0006493f0) (0xc000640b40) Stream removed, broadcasting: 3\nI1116 09:11:26.826737 409 log.go:181] (0xc0006493f0) (0xc00072e280) Stream removed, broadcasting: 5\n" Nov 16 09:11:26.833: INFO: stdout: "" Nov 16 09:11:26.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-264 execpod-affinitywtv5n -- /bin/sh -x -c nc -zv -t -w 2 10.99.191.65 80' Nov 16 09:11:27.061: INFO: stderr: "I1116 09:11:26.977075 427 log.go:181] (0xc00063b600) (0xc000574aa0) Create stream\nI1116 09:11:26.977124 427 log.go:181] (0xc00063b600) (0xc000574aa0) Stream added, broadcasting: 1\nI1116 09:11:26.981227 427 log.go:181] (0xc00063b600) Reply frame received for 1\nI1116 09:11:26.981266 427 log.go:181] (0xc00063b600) (0xc000574000) Create stream\nI1116 09:11:26.981279 427 log.go:181] (0xc00063b600) (0xc000574000) Stream added, 
broadcasting: 3\nI1116 09:11:26.982125 427 log.go:181] (0xc00063b600) Reply frame received for 3\nI1116 09:11:26.982154 427 log.go:181] (0xc00063b600) (0xc0004a28c0) Create stream\nI1116 09:11:26.982162 427 log.go:181] (0xc00063b600) (0xc0004a28c0) Stream added, broadcasting: 5\nI1116 09:11:26.982948 427 log.go:181] (0xc00063b600) Reply frame received for 5\nI1116 09:11:27.051323 427 log.go:181] (0xc00063b600) Data frame received for 3\nI1116 09:11:27.051386 427 log.go:181] (0xc000574000) (3) Data frame handling\nI1116 09:11:27.051421 427 log.go:181] (0xc00063b600) Data frame received for 5\nI1116 09:11:27.051441 427 log.go:181] (0xc0004a28c0) (5) Data frame handling\nI1116 09:11:27.051472 427 log.go:181] (0xc0004a28c0) (5) Data frame sent\nI1116 09:11:27.051497 427 log.go:181] (0xc00063b600) Data frame received for 5\n+ nc -zv -t -w 2 10.99.191.65 80\nConnection to 10.99.191.65 80 port [tcp/http] succeeded!\nI1116 09:11:27.051521 427 log.go:181] (0xc0004a28c0) (5) Data frame handling\nI1116 09:11:27.052723 427 log.go:181] (0xc00063b600) Data frame received for 1\nI1116 09:11:27.052970 427 log.go:181] (0xc000574aa0) (1) Data frame handling\nI1116 09:11:27.053023 427 log.go:181] (0xc000574aa0) (1) Data frame sent\nI1116 09:11:27.053051 427 log.go:181] (0xc00063b600) (0xc000574aa0) Stream removed, broadcasting: 1\nI1116 09:11:27.053094 427 log.go:181] (0xc00063b600) Go away received\nI1116 09:11:27.053603 427 log.go:181] (0xc00063b600) (0xc000574aa0) Stream removed, broadcasting: 1\nI1116 09:11:27.053628 427 log.go:181] (0xc00063b600) (0xc000574000) Stream removed, broadcasting: 3\nI1116 09:11:27.053640 427 log.go:181] (0xc00063b600) (0xc0004a28c0) Stream removed, broadcasting: 5\n" Nov 16 09:11:27.061: INFO: stdout: "" Nov 16 09:11:27.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-264 execpod-affinitywtv5n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32261' Nov 16 09:11:27.268: 
INFO: stderr: "I1116 09:11:27.188417 445 log.go:181] (0xc0006554a0) (0xc0005b6aa0) Create stream\nI1116 09:11:27.188478 445 log.go:181] (0xc0006554a0) (0xc0005b6aa0) Stream added, broadcasting: 1\nI1116 09:11:27.193138 445 log.go:181] (0xc0006554a0) Reply frame received for 1\nI1116 09:11:27.193296 445 log.go:181] (0xc0006554a0) (0xc000d8e3c0) Create stream\nI1116 09:11:27.193376 445 log.go:181] (0xc0006554a0) (0xc000d8e3c0) Stream added, broadcasting: 3\nI1116 09:11:27.195347 445 log.go:181] (0xc0006554a0) Reply frame received for 3\nI1116 09:11:27.195381 445 log.go:181] (0xc0006554a0) (0xc0005b6000) Create stream\nI1116 09:11:27.195389 445 log.go:181] (0xc0006554a0) (0xc0005b6000) Stream added, broadcasting: 5\nI1116 09:11:27.196284 445 log.go:181] (0xc0006554a0) Reply frame received for 5\nI1116 09:11:27.258707 445 log.go:181] (0xc0006554a0) Data frame received for 5\nI1116 09:11:27.258744 445 log.go:181] (0xc0005b6000) (5) Data frame handling\nI1116 09:11:27.258773 445 log.go:181] (0xc0005b6000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 32261\nConnection to 172.18.0.15 32261 port [tcp/32261] succeeded!\nI1116 09:11:27.258788 445 log.go:181] (0xc0006554a0) Data frame received for 5\nI1116 09:11:27.258837 445 log.go:181] (0xc0006554a0) Data frame received for 3\nI1116 09:11:27.258872 445 log.go:181] (0xc000d8e3c0) (3) Data frame handling\nI1116 09:11:27.258894 445 log.go:181] (0xc0005b6000) (5) Data frame handling\nI1116 09:11:27.260355 445 log.go:181] (0xc0006554a0) Data frame received for 1\nI1116 09:11:27.260377 445 log.go:181] (0xc0005b6aa0) (1) Data frame handling\nI1116 09:11:27.260400 445 log.go:181] (0xc0005b6aa0) (1) Data frame sent\nI1116 09:11:27.260411 445 log.go:181] (0xc0006554a0) (0xc0005b6aa0) Stream removed, broadcasting: 1\nI1116 09:11:27.260501 445 log.go:181] (0xc0006554a0) Go away received\nI1116 09:11:27.260799 445 log.go:181] (0xc0006554a0) (0xc0005b6aa0) Stream removed, broadcasting: 1\nI1116 09:11:27.260809 445 log.go:181] 
(0xc0006554a0) (0xc000d8e3c0) Stream removed, broadcasting: 3\nI1116 09:11:27.260813 445 log.go:181] (0xc0006554a0) (0xc0005b6000) Stream removed, broadcasting: 5\n" Nov 16 09:11:27.268: INFO: stdout: "" Nov 16 09:11:27.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-264 execpod-affinitywtv5n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32261' Nov 16 09:11:27.478: INFO: stderr: "I1116 09:11:27.411031 463 log.go:181] (0xc000846fd0) (0xc00083e6e0) Create stream\nI1116 09:11:27.411087 463 log.go:181] (0xc000846fd0) (0xc00083e6e0) Stream added, broadcasting: 1\nI1116 09:11:27.417086 463 log.go:181] (0xc000846fd0) Reply frame received for 1\nI1116 09:11:27.417161 463 log.go:181] (0xc000846fd0) (0xc00044e280) Create stream\nI1116 09:11:27.417189 463 log.go:181] (0xc000846fd0) (0xc00044e280) Stream added, broadcasting: 3\nI1116 09:11:27.418257 463 log.go:181] (0xc000846fd0) Reply frame received for 3\nI1116 09:11:27.418283 463 log.go:181] (0xc000846fd0) (0xc00083e000) Create stream\nI1116 09:11:27.418294 463 log.go:181] (0xc000846fd0) (0xc00083e000) Stream added, broadcasting: 5\nI1116 09:11:27.419319 463 log.go:181] (0xc000846fd0) Reply frame received for 5\nI1116 09:11:27.472579 463 log.go:181] (0xc000846fd0) Data frame received for 5\nI1116 09:11:27.472627 463 log.go:181] (0xc00083e000) (5) Data frame handling\nI1116 09:11:27.472641 463 log.go:181] (0xc00083e000) (5) Data frame sent\nI1116 09:11:27.472650 463 log.go:181] (0xc000846fd0) Data frame received for 5\nI1116 09:11:27.472654 463 log.go:181] (0xc00083e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32261\nConnection to 172.18.0.14 32261 port [tcp/32261] succeeded!\nI1116 09:11:27.472673 463 log.go:181] (0xc000846fd0) Data frame received for 3\nI1116 09:11:27.472680 463 log.go:181] (0xc00044e280) (3) Data frame handling\nI1116 09:11:27.474453 463 log.go:181] (0xc000846fd0) Data frame received for 1\nI1116 
09:11:27.474484 463 log.go:181] (0xc00083e6e0) (1) Data frame handling\nI1116 09:11:27.474514 463 log.go:181] (0xc00083e6e0) (1) Data frame sent\nI1116 09:11:27.474544 463 log.go:181] (0xc000846fd0) (0xc00083e6e0) Stream removed, broadcasting: 1\nI1116 09:11:27.474561 463 log.go:181] (0xc000846fd0) Go away received\nI1116 09:11:27.474899 463 log.go:181] (0xc000846fd0) (0xc00083e6e0) Stream removed, broadcasting: 1\nI1116 09:11:27.474920 463 log.go:181] (0xc000846fd0) (0xc00044e280) Stream removed, broadcasting: 3\nI1116 09:11:27.474929 463 log.go:181] (0xc000846fd0) (0xc00083e000) Stream removed, broadcasting: 5\n" Nov 16 09:11:27.478: INFO: stdout: "" Nov 16 09:11:27.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-264 execpod-affinitywtv5n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:32261/ ; done' Nov 16 09:11:27.803: INFO: stderr: "I1116 09:11:27.623190 482 log.go:181] (0xc000231760) (0xc00087eaa0) Create stream\nI1116 09:11:27.623239 482 log.go:181] (0xc000231760) (0xc00087eaa0) Stream added, broadcasting: 1\nI1116 09:11:27.625535 482 log.go:181] (0xc000231760) Reply frame received for 1\nI1116 09:11:27.625576 482 log.go:181] (0xc000231760) (0xc000c18140) Create stream\nI1116 09:11:27.625587 482 log.go:181] (0xc000231760) (0xc000c18140) Stream added, broadcasting: 3\nI1116 09:11:27.626301 482 log.go:181] (0xc000231760) Reply frame received for 3\nI1116 09:11:27.626334 482 log.go:181] (0xc000231760) (0xc000c181e0) Create stream\nI1116 09:11:27.626340 482 log.go:181] (0xc000231760) (0xc000c181e0) Stream added, broadcasting: 5\nI1116 09:11:27.627042 482 log.go:181] (0xc000231760) Reply frame received for 5\nI1116 09:11:27.695945 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.695975 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.695998 482 log.go:181] (0xc000c181e0) (5) Data 
frame sent\n+ seq 0 15\nI1116 09:11:27.700580 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.700602 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.700618 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.700633 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.700728 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.700763 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.711389 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.711407 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.711419 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.711775 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.711799 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.711815 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.711842 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.711854 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.711860 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.715916 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.715951 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.715987 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.716562 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.716575 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.716582 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.716682 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.716703 482 log.go:181] (0xc000c18140) (3) Data frame 
handling\nI1116 09:11:27.716721 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.722578 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.722597 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.722613 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.723578 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.723600 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.723614 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.723825 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.723857 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.723881 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.729763 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.729779 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.729788 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.729832 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.729865 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.729893 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.729913 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.729931 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.729984 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.734647 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.734662 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.734670 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.735264 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.735285 482 log.go:181] (0xc000231760) Data frame received 
for 5\nI1116 09:11:27.735317 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.735335 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.735356 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.735373 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.739828 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.739860 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.739885 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.740313 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.740330 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.740339 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.740349 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.740373 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.740403 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.743915 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.743950 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.743969 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.744594 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.744614 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.744631 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.744646 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.744662 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.744674 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.751698 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 
09:11:27.751730 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.751774 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.752372 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.752400 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.752425 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.752509 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.752525 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.752539 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.757139 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.757174 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.757203 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.757725 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.757778 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.757796 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.757815 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.757845 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.757871 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.761962 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.761982 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.761993 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.762420 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.762433 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.762441 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.762457 482 
log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.762480 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.762491 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.768193 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.768233 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.768260 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.768822 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.768898 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.768922 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.768958 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.768989 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.769013 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.775559 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.775584 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.775607 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.776060 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.776083 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.776096 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.776114 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.776140 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.776158 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.779166 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.779186 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.779195 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.779735 482 
log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.779759 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.779777 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.779807 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.779829 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.779841 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.783435 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.783553 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.783612 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.783889 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.783911 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.783926 482 log.go:181] (0xc000c181e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.783977 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.784006 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.784018 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.787388 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.787415 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.787431 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.787886 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.787902 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.787957 482 log.go:181] (0xc000c181e0) (5) Data frame sent\nI1116 09:11:27.787982 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.787994 482 log.go:181] (0xc000c18140) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32261/\nI1116 09:11:27.788011 482 log.go:181] 
(0xc000c18140) (3) Data frame sent\nI1116 09:11:27.792047 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.792077 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.792102 482 log.go:181] (0xc000c18140) (3) Data frame sent\nI1116 09:11:27.792627 482 log.go:181] (0xc000231760) Data frame received for 5\nI1116 09:11:27.792675 482 log.go:181] (0xc000c181e0) (5) Data frame handling\nI1116 09:11:27.793213 482 log.go:181] (0xc000231760) Data frame received for 3\nI1116 09:11:27.793261 482 log.go:181] (0xc000c18140) (3) Data frame handling\nI1116 09:11:27.795254 482 log.go:181] (0xc000231760) Data frame received for 1\nI1116 09:11:27.795288 482 log.go:181] (0xc00087eaa0) (1) Data frame handling\nI1116 09:11:27.795307 482 log.go:181] (0xc00087eaa0) (1) Data frame sent\nI1116 09:11:27.795337 482 log.go:181] (0xc000231760) (0xc00087eaa0) Stream removed, broadcasting: 1\nI1116 09:11:27.795373 482 log.go:181] (0xc000231760) Go away received\nI1116 09:11:27.795943 482 log.go:181] (0xc000231760) (0xc00087eaa0) Stream removed, broadcasting: 1\nI1116 09:11:27.795964 482 log.go:181] (0xc000231760) (0xc000c18140) Stream removed, broadcasting: 3\nI1116 09:11:27.795974 482 log.go:181] (0xc000231760) (0xc000c181e0) Stream removed, broadcasting: 5\n" Nov 16 09:11:27.804: INFO: stdout: "\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6\naffinity-nodeport-dxhz6" Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: 
Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Received response from host: affinity-nodeport-dxhz6 Nov 16 09:11:27.804: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-264, will wait for the garbage collector to delete the pods Nov 16 09:11:27.946: INFO: Deleting ReplicationController affinity-nodeport took: 5.959067ms Nov 16 09:11:28.546: INFO: Terminating ReplicationController affinity-nodeport pods took: 600.281313ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:11:45.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-264" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:30.375 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":43,"skipped":631,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:11:45.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 16 09:11:45.801: INFO: Waiting up to 5m0s for pod "pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29" in namespace "emptydir-9100" to be "Succeeded or Failed" Nov 16 09:11:45.818: INFO: Pod 
"pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29": Phase="Pending", Reason="", readiness=false. Elapsed: 16.227064ms Nov 16 09:11:47.905: INFO: Pod "pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10406203s Nov 16 09:11:49.921: INFO: Pod "pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119266361s STEP: Saw pod success Nov 16 09:11:49.921: INFO: Pod "pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29" satisfied condition "Succeeded or Failed" Nov 16 09:11:49.923: INFO: Trying to get logs from node latest-worker pod pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29 container test-container: STEP: delete the pod Nov 16 09:11:49.969: INFO: Waiting for pod pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29 to disappear Nov 16 09:11:49.973: INFO: Pod pod-70d0859d-a4e2-44b5-9f1e-49fb0d789c29 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:11:49.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9100" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:11:49.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4028 STEP: creating service affinity-clusterip in namespace services-4028 STEP: creating replication controller affinity-clusterip in namespace services-4028 I1116 09:11:51.556220 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4028, replica count: 3 I1116 09:11:54.606625 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:11:57.606874 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:11:57.613: INFO: Creating new exec pod Nov 16 09:12:02.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4028 execpod-affinitygphnb -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Nov 16 09:12:02.900: INFO: stderr: "I1116 09:12:02.802662 500 log.go:181] (0xc0008b1c30) (0xc000d0e8c0) Create stream\nI1116 09:12:02.802718 500 log.go:181] (0xc0008b1c30) (0xc000d0e8c0) Stream added, broadcasting: 1\nI1116 09:12:02.808243 500 log.go:181] (0xc0008b1c30) Reply frame received for 1\nI1116 09:12:02.808306 500 log.go:181] (0xc0008b1c30) (0xc0009f6000) Create stream\nI1116 09:12:02.808330 500 log.go:181] (0xc0008b1c30) (0xc0009f6000) Stream added, broadcasting: 3\nI1116 09:12:02.809182 500 log.go:181] (0xc0008b1c30) Reply frame received for 3\nI1116 09:12:02.809239 500 log.go:181] (0xc0008b1c30) (0xc000c4e0a0) Create stream\nI1116 09:12:02.809258 500 log.go:181] (0xc0008b1c30) (0xc000c4e0a0) Stream added, broadcasting: 5\nI1116 09:12:02.810070 500 log.go:181] (0xc0008b1c30) Reply frame received for 5\nI1116 09:12:02.891776 500 log.go:181] (0xc0008b1c30) Data frame received for 5\nI1116 09:12:02.891823 500 log.go:181] (0xc000c4e0a0) (5) Data frame handling\nI1116 09:12:02.891858 500 log.go:181] (0xc000c4e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI1116 09:12:02.892359 500 log.go:181] (0xc0008b1c30) Data frame received for 5\nI1116 09:12:02.892401 500 log.go:181] (0xc000c4e0a0) (5) Data frame handling\nI1116 09:12:02.892424 500 log.go:181] (0xc000c4e0a0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1116 09:12:02.892797 500 log.go:181] (0xc0008b1c30) Data frame received for 3\nI1116 09:12:02.892825 500 log.go:181] (0xc0009f6000) (3) Data frame handling\nI1116 09:12:02.892950 500 log.go:181] (0xc0008b1c30) Data frame received for 5\nI1116 09:12:02.892984 500 
log.go:181] (0xc000c4e0a0) (5) Data frame handling\nI1116 09:12:02.895031 500 log.go:181] (0xc0008b1c30) Data frame received for 1\nI1116 09:12:02.895061 500 log.go:181] (0xc000d0e8c0) (1) Data frame handling\nI1116 09:12:02.895082 500 log.go:181] (0xc000d0e8c0) (1) Data frame sent\nI1116 09:12:02.895106 500 log.go:181] (0xc0008b1c30) (0xc000d0e8c0) Stream removed, broadcasting: 1\nI1116 09:12:02.895124 500 log.go:181] (0xc0008b1c30) Go away received\nI1116 09:12:02.895650 500 log.go:181] (0xc0008b1c30) (0xc000d0e8c0) Stream removed, broadcasting: 1\nI1116 09:12:02.895678 500 log.go:181] (0xc0008b1c30) (0xc0009f6000) Stream removed, broadcasting: 3\nI1116 09:12:02.895691 500 log.go:181] (0xc0008b1c30) (0xc000c4e0a0) Stream removed, broadcasting: 5\n" Nov 16 09:12:02.901: INFO: stdout: "" Nov 16 09:12:02.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4028 execpod-affinitygphnb -- /bin/sh -x -c nc -zv -t -w 2 10.109.137.240 80' Nov 16 09:12:03.120: INFO: stderr: "I1116 09:12:03.037973 518 log.go:181] (0xc0002560b0) (0xc000e080a0) Create stream\nI1116 09:12:03.038052 518 log.go:181] (0xc0002560b0) (0xc000e080a0) Stream added, broadcasting: 1\nI1116 09:12:03.040305 518 log.go:181] (0xc0002560b0) Reply frame received for 1\nI1116 09:12:03.040352 518 log.go:181] (0xc0002560b0) (0xc000bce000) Create stream\nI1116 09:12:03.040372 518 log.go:181] (0xc0002560b0) (0xc000bce000) Stream added, broadcasting: 3\nI1116 09:12:03.041420 518 log.go:181] (0xc0002560b0) Reply frame received for 3\nI1116 09:12:03.041461 518 log.go:181] (0xc0002560b0) (0xc000c299a0) Create stream\nI1116 09:12:03.041474 518 log.go:181] (0xc0002560b0) (0xc000c299a0) Stream added, broadcasting: 5\nI1116 09:12:03.042437 518 log.go:181] (0xc0002560b0) Reply frame received for 5\nI1116 09:12:03.111143 518 log.go:181] (0xc0002560b0) Data frame received for 5\nI1116 09:12:03.111185 518 log.go:181] (0xc000c299a0) (5) Data 
frame handling\nI1116 09:12:03.111208 518 log.go:181] (0xc000c299a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.137.240 80\nConnection to 10.109.137.240 80 port [tcp/http] succeeded!\nI1116 09:12:03.111474 518 log.go:181] (0xc0002560b0) Data frame received for 5\nI1116 09:12:03.111520 518 log.go:181] (0xc000c299a0) (5) Data frame handling\nI1116 09:12:03.111962 518 log.go:181] (0xc0002560b0) Data frame received for 3\nI1116 09:12:03.111995 518 log.go:181] (0xc000bce000) (3) Data frame handling\nI1116 09:12:03.113924 518 log.go:181] (0xc0002560b0) Data frame received for 1\nI1116 09:12:03.113973 518 log.go:181] (0xc000e080a0) (1) Data frame handling\nI1116 09:12:03.113988 518 log.go:181] (0xc000e080a0) (1) Data frame sent\nI1116 09:12:03.114001 518 log.go:181] (0xc0002560b0) (0xc000e080a0) Stream removed, broadcasting: 1\nI1116 09:12:03.114394 518 log.go:181] (0xc0002560b0) (0xc000e080a0) Stream removed, broadcasting: 1\nI1116 09:12:03.114421 518 log.go:181] (0xc0002560b0) (0xc000bce000) Stream removed, broadcasting: 3\nI1116 09:12:03.114637 518 log.go:181] (0xc0002560b0) (0xc000c299a0) Stream removed, broadcasting: 5\n" Nov 16 09:12:03.120: INFO: stdout: "" Nov 16 09:12:03.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4028 execpod-affinitygphnb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.109.137.240:80/ ; done' Nov 16 09:12:03.447: INFO: stderr: "I1116 09:12:03.248825 537 log.go:181] (0xc000d9d290) (0xc000868780) Create stream\nI1116 09:12:03.248980 537 log.go:181] (0xc000d9d290) (0xc000868780) Stream added, broadcasting: 1\nI1116 09:12:03.253288 537 log.go:181] (0xc000d9d290) Reply frame received for 1\nI1116 09:12:03.253336 537 log.go:181] (0xc000d9d290) (0xc0005a4000) Create stream\nI1116 09:12:03.253355 537 log.go:181] (0xc000d9d290) (0xc0005a4000) Stream added, broadcasting: 3\nI1116 09:12:03.254227 537 log.go:181] 
(0xc000d9d290) Reply frame received for 3\n[... repeated log.go:181 data-frame received/handling/sent messages elided; stderr echoed 16 iterations of '+ echo' and '+ curl -q -s --connect-timeout 2 http://10.109.137.240:80/', followed by stream teardown ...]\nI1116 09:12:03.440672 537 
log.go:181] (0xc000d9d290) (0xc0005a40a0) Stream removed, broadcasting: 5\n" Nov 16 09:12:03.447: INFO: stdout: "\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw\naffinity-clusterip-w7bzw" Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Received response from host: affinity-clusterip-w7bzw Nov 16 09:12:03.447: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4028, will wait for the garbage collector 
to delete the pods Nov 16 09:12:03.546: INFO: Deleting ReplicationController affinity-clusterip took: 6.29008ms Nov 16 09:12:03.946: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.209669ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:12:15.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4028" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.853 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":45,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:12:15.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:12:15.883: INFO: Creating ReplicaSet my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a Nov 16 09:12:15.924: INFO: Pod name my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a: Found 0 pods out of 1 Nov 16 09:12:20.951: INFO: Pod name my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a: Found 1 pods out of 1 Nov 16 09:12:20.951: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a" is running Nov 16 09:12:20.962: INFO: Pod "my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a-w6ckj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 09:12:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 09:12:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 09:12:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 09:12:15 +0000 UTC Reason: Message:}]) Nov 16 09:12:20.963: INFO: Trying to dial the pod Nov 16 09:12:25.977: INFO: Controller my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a: Got expected result from replica 1 [my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a-w6ckj]: "my-hostname-basic-81752018-3232-45f3-819e-4d82c2e6a20a-w6ckj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
Nov 16 09:12:25.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9628" for this suite. • [SLOW TEST:10.151 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":46,"skipped":685,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:12:25.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:12:26.634: INFO: deployment "sample-webhook-deployment" doesn't have 
the required revision set Nov 16 09:12:28.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114746, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114746, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114746, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114746, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:12:31.685: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Nov 16 09:12:31.710: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:12:31.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3249" for this suite. 
STEP: Destroying namespace "webhook-3249-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.861 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":47,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:12:31.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-e3ef7f02-5f3a-42a0-b314-c2d424eba07e STEP: Creating a pod to test consume configMaps Nov 16 09:12:32.031: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67" in namespace "configmap-9780" to be "Succeeded or Failed" Nov 16 09:12:32.041: INFO: Pod "pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67": Phase="Pending", Reason="", readiness=false. Elapsed: 9.207959ms Nov 16 09:12:34.263: INFO: Pod "pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231428439s Nov 16 09:12:36.267: INFO: Pod "pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235739816s STEP: Saw pod success Nov 16 09:12:36.267: INFO: Pod "pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67" satisfied condition "Succeeded or Failed" Nov 16 09:12:36.270: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67 container configmap-volume-test: STEP: delete the pod Nov 16 09:12:36.298: INFO: Waiting for pod pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67 to disappear Nov 16 09:12:36.316: INFO: Pod pod-configmaps-a5b6c18d-e608-4300-86ce-bddf84cfdb67 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:12:36.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9780" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":754,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:12:36.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1116 09:12:37.925981 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 16 09:13:40.045: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:13:40.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9557" for this suite. 
• [SLOW TEST:63.431 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":49,"skipped":762,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:13:40.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:13:40.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4172" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":50,"skipped":807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:13:40.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:13:40.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b" in namespace "downward-api-474" to be "Succeeded or Failed" Nov 16 09:13:40.332: INFO: Pod "downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.138399ms Nov 16 09:13:42.337: INFO: Pod "downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035609773s Nov 16 09:13:44.341: INFO: Pod "downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03973962s STEP: Saw pod success Nov 16 09:13:44.341: INFO: Pod "downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b" satisfied condition "Succeeded or Failed" Nov 16 09:13:44.343: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b container client-container: STEP: delete the pod Nov 16 09:13:44.463: INFO: Waiting for pod downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b to disappear Nov 16 09:13:44.623: INFO: Pod downwardapi-volume-d5a3559b-5e90-4bc5-a8c6-86145461860b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:13:44.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-474" for this suite. 
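The Downward API volume test above verifies that a container can read its own memory request from a mounted file. A minimal pod manifest of the same shape, using `resourceFieldRef` — names here are illustrative, not taken from the test — might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # print the projected memory request, then exit (pod reaches Succeeded)
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

The test waits for the pod to reach `Succeeded` and then inspects the container log, which is why the log shows the `Pending` → `Succeeded` phase polling.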
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":51,"skipped":847,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:13:44.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:13:45.794: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:13:48.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114825, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114825, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment 
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114826, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114825, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 09:13:50.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114825, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114825, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114826, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114825, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:13:53.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:13:53.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5829-crds.webhook.example.com via the AdmissionRegistration API 
STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:13:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-998" for this suite. STEP: Destroying namespace "webhook-998-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.568 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":52,"skipped":877,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:13:55.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Nov 16 09:13:55.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9771912 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:13:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:13:55.300: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9771912 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:13:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Nov 16 09:14:05.317: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9771957 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[{e2e.test Update v1 2020-11-16 09:14:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:14:05.317: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9771957 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Nov 16 09:14:15.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9771987 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:14:15.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9771987 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} 
STEP: deleting configmap A and ensuring the correct watchers observe the notification Nov 16 09:14:25.331: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9772018 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:14:25.332: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-a 518018a8-5c78-42f5-b769-7d19b3fec035 9772018 0 2020-11-16 09:13:55 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Nov 16 09:14:35.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-b 5e109d13-6c4c-4587-b70d-cfafdac9d87f 9772048 0 2020-11-16 09:14:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:14:35.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-b 5e109d13-6c4c-4587-b70d-cfafdac9d87f 
9772048 0 2020-11-16 09:14:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Nov 16 09:14:45.348: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-b 5e109d13-6c4c-4587-b70d-cfafdac9d87f 9772078 0 2020-11-16 09:14:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:14:45.348: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8493 /api/v1/namespaces/watch-8493/configmaps/e2e-watch-test-configmap-b 5e109d13-6c4c-4587-b70d-cfafdac9d87f 9772078 0 2020-11-16 09:14:35 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-16 09:14:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:14:55.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8493" for this suite. 
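The watch test above registers three watchers (label A, label B, A-or-B) and confirms each sees only the ADDED/MODIFIED/DELETED events for ConfigMaps matching its selector. The object driving those events, reconstructed from the log (the `mutation` data key is what changes between the MODIFIED notifications), is roughly:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    # selector matched by the "label A" and "label A or B" watchers
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"   # bumped to "2" before the second MODIFIED event
```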
• [SLOW TEST:60.106 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":53,"skipped":888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:14:55.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Nov 16 09:14:55.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create 
-f -' Nov 16 09:14:55.784: INFO: stderr: "" Nov 16 09:14:55.784: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Nov 16 09:14:55.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config diff -f -' Nov 16 09:14:56.274: INFO: rc: 1 Nov 16 09:14:56.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete -f -' Nov 16 09:14:56.385: INFO: stderr: "" Nov 16 09:14:56.385: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:14:56.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":54,"skipped":917,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:14:56.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 16 09:14:56.459: INFO: Waiting up to 5m0s for pod "pod-86b63409-57d6-4898-9fb1-bc1f13566b00" in namespace "emptydir-7363" to be "Succeeded or Failed" Nov 16 09:14:56.508: INFO: Pod "pod-86b63409-57d6-4898-9fb1-bc1f13566b00": Phase="Pending", Reason="", readiness=false. Elapsed: 49.205273ms Nov 16 09:14:58.635: INFO: Pod "pod-86b63409-57d6-4898-9fb1-bc1f13566b00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175885716s Nov 16 09:15:00.713: INFO: Pod "pod-86b63409-57d6-4898-9fb1-bc1f13566b00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.253538818s STEP: Saw pod success Nov 16 09:15:00.713: INFO: Pod "pod-86b63409-57d6-4898-9fb1-bc1f13566b00" satisfied condition "Succeeded or Failed" Nov 16 09:15:00.716: INFO: Trying to get logs from node latest-worker pod pod-86b63409-57d6-4898-9fb1-bc1f13566b00 container test-container: STEP: delete the pod Nov 16 09:15:00.755: INFO: Waiting for pod pod-86b63409-57d6-4898-9fb1-bc1f13566b00 to disappear Nov 16 09:15:00.768: INFO: Pod pod-86b63409-57d6-4898-9fb1-bc1f13566b00 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:15:00.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7363" for this suite. 
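The EmptyDir `(root,0666,tmpfs)` variant runs a container as root that creates a file with mode 0666 on a memory-backed volume and checks the resulting permissions. A sketch of an equivalent pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file with mode 0666 on the tmpfs mount and show its permissions
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs; this is what makes the test [LinuxOnly]
```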
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":920,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:15:00.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 16 09:15:00.861: INFO: Waiting up to 5m0s for pod "downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534" in namespace "downward-api-3681" to be "Succeeded or Failed" Nov 16 09:15:00.883: INFO: Pod "downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534": Phase="Pending", Reason="", readiness=false. Elapsed: 21.435598ms Nov 16 09:15:02.934: INFO: Pod "downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072840561s Nov 16 09:15:04.939: INFO: Pod "downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077930001s STEP: Saw pod success Nov 16 09:15:04.939: INFO: Pod "downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534" satisfied condition "Succeeded or Failed" Nov 16 09:15:04.942: INFO: Trying to get logs from node latest-worker pod downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534 container dapi-container: STEP: delete the pod Nov 16 09:15:04.980: INFO: Waiting for pod downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534 to disappear Nov 16 09:15:05.005: INFO: Pod downward-api-b45b8bb5-ec32-46c3-8e06-2e73c7fb6534 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:15:05.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3681" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":940,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:15:05.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be 
deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1116 09:15:16.825144 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 16 09:16:18.843: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 16 09:16:18.843: INFO: Deleting pod "simpletest-rc-to-be-deleted-4dclr" in namespace "gc-2009" Nov 16 09:16:18.884: INFO: Deleting pod "simpletest-rc-to-be-deleted-d5lbd" in namespace "gc-2009" Nov 16 09:16:19.010: INFO: Deleting pod "simpletest-rc-to-be-deleted-flnsc" in namespace "gc-2009" Nov 16 09:16:20.334: INFO: Deleting pod "simpletest-rc-to-be-deleted-g55bf" in namespace "gc-2009" Nov 16 09:16:20.921: INFO: Deleting pod "simpletest-rc-to-be-deleted-g8m6x" in namespace "gc-2009" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:16:21.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2009" for this suite. 
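The garbage collector test above gives half the pods a second owner, then deletes the first ReplicationController; the pods must survive because a valid owner remains. The shape of the dual `ownerReferences` involved — names match the log, but the UIDs below are placeholders, since real ones are assigned by the API server — is roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod   # hypothetical pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted   # owner that gets deleted
    uid: 00000000-0000-0000-0000-000000000001   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay          # valid owner that keeps the pod alive
    uid: 00000000-0000-0000-0000-000000000002   # placeholder UID
spec:
  containers:
  - name: app
    image: busybox
```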
• [SLOW TEST:76.199 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":57,"skipped":1004,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:16:21.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:16:21.992: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Nov 16 09:16:24.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114982, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114982, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114982, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741114981, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:16:27.037: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: 
create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:16:37.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5018" for this suite. STEP: Destroying namespace "webhook-5018-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.113 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":58,"skipped":1024,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:16:37.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:16:37.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636" in namespace "downward-api-8960" to be "Succeeded or Failed" Nov 16 09:16:37.428: INFO: Pod "downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636": Phase="Pending", Reason="", readiness=false. Elapsed: 30.310978ms Nov 16 09:16:39.432: INFO: Pod "downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034044693s Nov 16 09:16:41.436: INFO: Pod "downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038308287s STEP: Saw pod success Nov 16 09:16:41.436: INFO: Pod "downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636" satisfied condition "Succeeded or Failed" Nov 16 09:16:41.439: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636 container client-container: STEP: delete the pod Nov 16 09:16:41.489: INFO: Waiting for pod downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636 to disappear Nov 16 09:16:41.506: INFO: Pod downwardapi-volume-3d166f13-4e85-4c0b-950e-cc9bff00d636 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:16:41.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8960" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:16:41.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-f28728ba-137e-4a65-b50c-91f567d9cdd3 STEP: Creating a pod to test consume configMaps Nov 16 09:16:41.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8" in namespace "configmap-5487" to be "Succeeded or Failed" Nov 16 09:16:41.619: INFO: Pod "pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.457231ms Nov 16 09:16:43.627: INFO: Pod "pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029369566s Nov 16 09:16:45.642: INFO: Pod "pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8": Phase="Running", Reason="", readiness=true. Elapsed: 4.044110954s Nov 16 09:16:47.646: INFO: Pod "pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047975162s STEP: Saw pod success Nov 16 09:16:47.646: INFO: Pod "pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8" satisfied condition "Succeeded or Failed" Nov 16 09:16:47.648: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8 container configmap-volume-test: STEP: delete the pod Nov 16 09:16:47.721: INFO: Waiting for pod pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8 to disappear Nov 16 09:16:47.741: INFO: Pod pod-configmaps-1d59e06c-0a1b-478f-9b2e-1507868636a8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:16:47.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5487" for this suite. 
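The ConfigMap volume test above verifies the permission bits that `defaultMode` puts on the projected file. What that setting governs can be illustrated locally, outside any cluster, with an ordinary file (a sketch only; the temp file stands in for the projected volume entry, and `stat -c` assumes GNU coreutils):

```shell
# defaultMode controls the mode bits of files projected from the
# ConfigMap into the volume. Here we apply 0400 to a scratch file and
# read the bits back, the same check the test container performs with
# stat on the mounted path.
f=$(mktemp)
chmod 0400 "$f"
stat -c '%a' "$f"
```

Inside the real test the container runs the equivalent `stat` against the file under the volume's `mountPath` and the framework compares the output against the mode set in the pod spec.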
• [SLOW TEST:6.236 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:16:47.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Nov 16 09:16:47.811: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer 
[NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:16:55.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9879" for this suite. • [SLOW TEST:7.924 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":61,"skipped":1064,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:16:55.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2116 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Nov 16 09:16:55.784: INFO: Found 0 stateful pods, waiting for 3 Nov 16 09:17:05.789: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:17:05.789: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:17:05.789: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Nov 16 09:17:15.789: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:17:15.789: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:17:15.789: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:17:15.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:17:16.068: INFO: stderr: "I1116 09:17:15.938140 611 log.go:181] (0xc000e9d340) (0xc000f8c8c0) Create stream\nI1116 09:17:15.938202 611 log.go:181] (0xc000e9d340) (0xc000f8c8c0) Stream added, broadcasting: 1\nI1116 09:17:15.945019 611 log.go:181] (0xc000e9d340) Reply frame received for 1\nI1116 09:17:15.945057 611 log.go:181] (0xc000e9d340) (0xc000f8c000) Create stream\nI1116 09:17:15.945067 611 log.go:181] (0xc000e9d340) (0xc000f8c000) Stream added, broadcasting: 3\nI1116 09:17:15.946078 611 log.go:181] (0xc000e9d340) Reply frame 
received for 3\nI1116 09:17:15.946116 611 log.go:181] (0xc000e9d340) (0xc0009e2960) Create stream\nI1116 09:17:15.946128 611 log.go:181] (0xc000e9d340) (0xc0009e2960) Stream added, broadcasting: 5\nI1116 09:17:15.947092 611 log.go:181] (0xc000e9d340) Reply frame received for 5\nI1116 09:17:16.041481 611 log.go:181] (0xc000e9d340) Data frame received for 5\nI1116 09:17:16.041521 611 log.go:181] (0xc0009e2960) (5) Data frame handling\nI1116 09:17:16.041544 611 log.go:181] (0xc0009e2960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:17:16.058093 611 log.go:181] (0xc000e9d340) Data frame received for 3\nI1116 09:17:16.058130 611 log.go:181] (0xc000f8c000) (3) Data frame handling\nI1116 09:17:16.058171 611 log.go:181] (0xc000f8c000) (3) Data frame sent\nI1116 09:17:16.058340 611 log.go:181] (0xc000e9d340) Data frame received for 5\nI1116 09:17:16.058365 611 log.go:181] (0xc0009e2960) (5) Data frame handling\nI1116 09:17:16.058386 611 log.go:181] (0xc000e9d340) Data frame received for 3\nI1116 09:17:16.058399 611 log.go:181] (0xc000f8c000) (3) Data frame handling\nI1116 09:17:16.060745 611 log.go:181] (0xc000e9d340) Data frame received for 1\nI1116 09:17:16.060785 611 log.go:181] (0xc000f8c8c0) (1) Data frame handling\nI1116 09:17:16.060805 611 log.go:181] (0xc000f8c8c0) (1) Data frame sent\nI1116 09:17:16.060822 611 log.go:181] (0xc000e9d340) (0xc000f8c8c0) Stream removed, broadcasting: 1\nI1116 09:17:16.060984 611 log.go:181] (0xc000e9d340) Go away received\nI1116 09:17:16.061424 611 log.go:181] (0xc000e9d340) (0xc000f8c8c0) Stream removed, broadcasting: 1\nI1116 09:17:16.061457 611 log.go:181] (0xc000e9d340) (0xc000f8c000) Stream removed, broadcasting: 3\nI1116 09:17:16.061482 611 log.go:181] (0xc000e9d340) (0xc0009e2960) Stream removed, broadcasting: 5\n" Nov 16 09:17:16.068: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:17:16.068: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Nov 16 09:17:26.101: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Nov 16 09:17:36.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:17:36.964: INFO: stderr: "I1116 09:17:36.862819 629 log.go:181] (0xc000b1d4a0) (0xc000b14960) Create stream\nI1116 09:17:36.862890 629 log.go:181] (0xc000b1d4a0) (0xc000b14960) Stream added, broadcasting: 1\nI1116 09:17:36.866899 629 log.go:181] (0xc000b1d4a0) Reply frame received for 1\nI1116 09:17:36.866936 629 log.go:181] (0xc000b1d4a0) (0xc000b14000) Create stream\nI1116 09:17:36.866947 629 log.go:181] (0xc000b1d4a0) (0xc000b14000) Stream added, broadcasting: 3\nI1116 09:17:36.867792 629 log.go:181] (0xc000b1d4a0) Reply frame received for 3\nI1116 09:17:36.867822 629 log.go:181] (0xc000b1d4a0) (0xc000b140a0) Create stream\nI1116 09:17:36.867832 629 log.go:181] (0xc000b1d4a0) (0xc000b140a0) Stream added, broadcasting: 5\nI1116 09:17:36.868665 629 log.go:181] (0xc000b1d4a0) Reply frame received for 5\nI1116 09:17:36.954460 629 log.go:181] (0xc000b1d4a0) Data frame received for 3\nI1116 09:17:36.954512 629 log.go:181] (0xc000b14000) (3) Data frame handling\nI1116 09:17:36.954552 629 log.go:181] (0xc000b14000) (3) Data frame sent\nI1116 09:17:36.954582 629 log.go:181] (0xc000b1d4a0) Data frame received for 3\nI1116 09:17:36.954596 629 log.go:181] (0xc000b14000) (3) Data frame handling\nI1116 09:17:36.954616 629 log.go:181] (0xc000b1d4a0) Data frame received for 5\nI1116 09:17:36.954626 629 log.go:181] (0xc000b140a0) (5) Data frame handling\nI1116 09:17:36.954639 629 log.go:181] 
(0xc000b140a0) (5) Data frame sent\nI1116 09:17:36.954655 629 log.go:181] (0xc000b1d4a0) Data frame received for 5\nI1116 09:17:36.954667 629 log.go:181] (0xc000b140a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 09:17:36.956359 629 log.go:181] (0xc000b1d4a0) Data frame received for 1\nI1116 09:17:36.956393 629 log.go:181] (0xc000b14960) (1) Data frame handling\nI1116 09:17:36.956416 629 log.go:181] (0xc000b14960) (1) Data frame sent\nI1116 09:17:36.956435 629 log.go:181] (0xc000b1d4a0) (0xc000b14960) Stream removed, broadcasting: 1\nI1116 09:17:36.956461 629 log.go:181] (0xc000b1d4a0) Go away received\nI1116 09:17:36.956971 629 log.go:181] (0xc000b1d4a0) (0xc000b14960) Stream removed, broadcasting: 1\nI1116 09:17:36.956992 629 log.go:181] (0xc000b1d4a0) (0xc000b14000) Stream removed, broadcasting: 3\nI1116 09:17:36.957001 629 log.go:181] (0xc000b1d4a0) (0xc000b140a0) Stream removed, broadcasting: 5\n" Nov 16 09:17:36.964: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:17:36.964: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:17:47.011: INFO: Waiting for StatefulSet statefulset-2116/ss2 to complete update Nov 16 09:17:47.011: INFO: Waiting for Pod statefulset-2116/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 16 09:17:47.012: INFO: Waiting for Pod statefulset-2116/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 16 09:17:57.033: INFO: Waiting for StatefulSet statefulset-2116/ss2 to complete update Nov 16 09:17:57.033: INFO: Waiting for Pod statefulset-2116/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Nov 16 09:18:07.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss2-1 -- 
/bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:18:07.285: INFO: stderr: "I1116 09:18:07.134912 647 log.go:181] (0xc000c93080) (0xc00091abe0) Create stream\nI1116 09:18:07.134971 647 log.go:181] (0xc000c93080) (0xc00091abe0) Stream added, broadcasting: 1\nI1116 09:18:07.137466 647 log.go:181] (0xc000c93080) Reply frame received for 1\nI1116 09:18:07.137520 647 log.go:181] (0xc000c93080) (0xc00091ac80) Create stream\nI1116 09:18:07.137546 647 log.go:181] (0xc000c93080) (0xc00091ac80) Stream added, broadcasting: 3\nI1116 09:18:07.138825 647 log.go:181] (0xc000c93080) Reply frame received for 3\nI1116 09:18:07.138874 647 log.go:181] (0xc000c93080) (0xc00063a320) Create stream\nI1116 09:18:07.138902 647 log.go:181] (0xc000c93080) (0xc00063a320) Stream added, broadcasting: 5\nI1116 09:18:07.139903 647 log.go:181] (0xc000c93080) Reply frame received for 5\nI1116 09:18:07.240190 647 log.go:181] (0xc000c93080) Data frame received for 5\nI1116 09:18:07.240227 647 log.go:181] (0xc00063a320) (5) Data frame handling\nI1116 09:18:07.240249 647 log.go:181] (0xc00063a320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:18:07.274598 647 log.go:181] (0xc000c93080) Data frame received for 3\nI1116 09:18:07.274627 647 log.go:181] (0xc00091ac80) (3) Data frame handling\nI1116 09:18:07.274635 647 log.go:181] (0xc00091ac80) (3) Data frame sent\nI1116 09:18:07.274648 647 log.go:181] (0xc000c93080) Data frame received for 3\nI1116 09:18:07.274654 647 log.go:181] (0xc00091ac80) (3) Data frame handling\nI1116 09:18:07.274889 647 log.go:181] (0xc000c93080) Data frame received for 5\nI1116 09:18:07.274919 647 log.go:181] (0xc00063a320) (5) Data frame handling\nI1116 09:18:07.276594 647 log.go:181] (0xc000c93080) Data frame received for 1\nI1116 09:18:07.276638 647 log.go:181] (0xc00091abe0) (1) Data frame handling\nI1116 09:18:07.276645 647 log.go:181] (0xc00091abe0) (1) Data frame sent\nI1116 09:18:07.276666 647 log.go:181] 
(0xc000c93080) (0xc00091abe0) Stream removed, broadcasting: 1\nI1116 09:18:07.276677 647 log.go:181] (0xc000c93080) Go away received\nI1116 09:18:07.281035 647 log.go:181] (0xc000c93080) (0xc00091abe0) Stream removed, broadcasting: 1\nI1116 09:18:07.281094 647 log.go:181] (0xc000c93080) (0xc00091ac80) Stream removed, broadcasting: 3\nI1116 09:18:07.281107 647 log.go:181] (0xc000c93080) (0xc00063a320) Stream removed, broadcasting: 5\n" Nov 16 09:18:07.285: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:18:07.285: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:18:17.320: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Nov 16 09:18:27.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:18:27.601: INFO: stderr: "I1116 09:18:27.495833 665 log.go:181] (0xc0006c36b0) (0xc000758780) Create stream\nI1116 09:18:27.495921 665 log.go:181] (0xc0006c36b0) (0xc000758780) Stream added, broadcasting: 1\nI1116 09:18:27.502253 665 log.go:181] (0xc0006c36b0) Reply frame received for 1\nI1116 09:18:27.502315 665 log.go:181] (0xc0006c36b0) (0xc000758820) Create stream\nI1116 09:18:27.502337 665 log.go:181] (0xc0006c36b0) (0xc000758820) Stream added, broadcasting: 3\nI1116 09:18:27.503494 665 log.go:181] (0xc0006c36b0) Reply frame received for 3\nI1116 09:18:27.503527 665 log.go:181] (0xc0006c36b0) (0xc0007588c0) Create stream\nI1116 09:18:27.503537 665 log.go:181] (0xc0006c36b0) (0xc0007588c0) Stream added, broadcasting: 5\nI1116 09:18:27.504349 665 log.go:181] (0xc0006c36b0) Reply frame received for 5\nI1116 09:18:27.593864 665 log.go:181] (0xc0006c36b0) Data frame received for 5\nI1116 09:18:27.593905 665 log.go:181] 
(0xc0007588c0) (5) Data frame handling\nI1116 09:18:27.593920 665 log.go:181] (0xc0007588c0) (5) Data frame sent\nI1116 09:18:27.593929 665 log.go:181] (0xc0006c36b0) Data frame received for 5\nI1116 09:18:27.593936 665 log.go:181] (0xc0007588c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 09:18:27.593960 665 log.go:181] (0xc0006c36b0) Data frame received for 3\nI1116 09:18:27.593968 665 log.go:181] (0xc000758820) (3) Data frame handling\nI1116 09:18:27.593983 665 log.go:181] (0xc000758820) (3) Data frame sent\nI1116 09:18:27.593997 665 log.go:181] (0xc0006c36b0) Data frame received for 3\nI1116 09:18:27.594005 665 log.go:181] (0xc000758820) (3) Data frame handling\nI1116 09:18:27.595191 665 log.go:181] (0xc0006c36b0) Data frame received for 1\nI1116 09:18:27.595203 665 log.go:181] (0xc000758780) (1) Data frame handling\nI1116 09:18:27.595209 665 log.go:181] (0xc000758780) (1) Data frame sent\nI1116 09:18:27.595218 665 log.go:181] (0xc0006c36b0) (0xc000758780) Stream removed, broadcasting: 1\nI1116 09:18:27.595234 665 log.go:181] (0xc0006c36b0) Go away received\nI1116 09:18:27.596354 665 log.go:181] (0xc0006c36b0) (0xc000758780) Stream removed, broadcasting: 1\nI1116 09:18:27.596422 665 log.go:181] (0xc0006c36b0) (0xc000758820) Stream removed, broadcasting: 3\nI1116 09:18:27.596440 665 log.go:181] (0xc0006c36b0) (0xc0007588c0) Stream removed, broadcasting: 5\n" Nov 16 09:18:27.601: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:18:27.601: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:18:37.621: INFO: Waiting for StatefulSet statefulset-2116/ss2 to complete update Nov 16 09:18:37.621: INFO: Waiting for Pod statefulset-2116/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Nov 16 09:18:37.621: INFO: Waiting for Pod statefulset-2116/ss2-1 to have revision 
ss2-65c7964b94 update revision ss2-84f9d6bf57 Nov 16 09:18:47.629: INFO: Waiting for StatefulSet statefulset-2116/ss2 to complete update Nov 16 09:18:47.629: INFO: Waiting for Pod statefulset-2116/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 16 09:18:57.629: INFO: Deleting all statefulset in ns statefulset-2116 Nov 16 09:18:57.631: INFO: Scaling statefulset ss2 to 0 Nov 16 09:19:17.652: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:19:17.655: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:19:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2116" for this suite. 
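The rolling-update test above repeatedly shuttles `index.html` in and out of the Apache docroot via `kubectl exec ... /bin/sh -x -c 'mv -v ... || true'`. That exec'd command can be reproduced locally to show why the test appends `|| true` (the mktemp directories are hypothetical stand-ins for the pod filesystem):

```shell
# `-x` echoes the command to stderr, which is the "+ mv -v ..." line
# visible in the captured stderr above. `|| true` makes the step
# idempotent: on a retry, when the file has already been moved, mv
# fails but the overall exec still exits 0.
htdocs=$(mktemp -d); tmp=$(mktemp -d)
echo '<html/>' > "$htdocs/index.html"
/bin/sh -x -c "mv -v $htdocs/index.html $tmp/ || true"
# Second run: the source file is gone, mv errors, || true absorbs it.
/bin/sh -x -c "mv -v $htdocs/index.html $tmp/ || true"
```

This keeps the e2e step from failing the whole spec when it races with a pod restart during the rollout.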
• [SLOW TEST:142.004 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":62,"skipped":1079,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:19:17.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication 
controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Nov 16 09:19:17.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6304' Nov 16 09:19:18.120: INFO: stderr: "" Nov 16 09:19:18.120: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 16 09:19:18.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:18.267: INFO: stderr: "" Nov 16 09:19:18.267: INFO: stdout: "update-demo-nautilus-7klfw update-demo-nautilus-xnh6v " Nov 16 09:19:18.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7klfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:18.376: INFO: stderr: "" Nov 16 09:19:18.376: INFO: stdout: "" Nov 16 09:19:18.377: INFO: update-demo-nautilus-7klfw is created but not running Nov 16 09:19:23.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:23.486: INFO: stderr: "" Nov 16 09:19:23.486: INFO: stdout: "update-demo-nautilus-7klfw update-demo-nautilus-xnh6v " Nov 16 09:19:23.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7klfw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:23.577: INFO: stderr: "" Nov 16 09:19:23.577: INFO: stdout: "true" Nov 16 09:19:23.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7klfw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:23.674: INFO: stderr: "" Nov 16 09:19:23.674: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 09:19:23.674: INFO: validating pod update-demo-nautilus-7klfw Nov 16 09:19:23.697: INFO: got data: { "image": "nautilus.jpg" } Nov 16 09:19:23.697: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 16 09:19:23.697: INFO: update-demo-nautilus-7klfw is verified up and running Nov 16 09:19:23.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:23.799: INFO: stderr: "" Nov 16 09:19:23.800: INFO: stdout: "true" Nov 16 09:19:23.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:23.939: INFO: stderr: "" Nov 16 09:19:23.939: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 09:19:23.939: INFO: validating pod update-demo-nautilus-xnh6v Nov 16 09:19:23.947: INFO: got data: { "image": "nautilus.jpg" } Nov 16 09:19:23.947: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 16 09:19:23.947: INFO: update-demo-nautilus-xnh6v is verified up and running STEP: scaling down the replication controller Nov 16 09:19:23.955: INFO: scanned /root for discovery docs: Nov 16 09:19:23.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6304' Nov 16 09:19:25.074: INFO: stderr: "" Nov 16 09:19:25.074: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 16 09:19:25.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:25.174: INFO: stderr: "" Nov 16 09:19:25.174: INFO: stdout: "update-demo-nautilus-7klfw update-demo-nautilus-xnh6v " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 16 09:19:30.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:30.292: INFO: stderr: "" Nov 16 09:19:30.292: INFO: stdout: "update-demo-nautilus-7klfw update-demo-nautilus-xnh6v " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 16 09:19:35.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:37.026: INFO: stderr: "" Nov 16 09:19:37.026: INFO: stdout: "update-demo-nautilus-7klfw update-demo-nautilus-xnh6v " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 16 09:19:42.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:42.152: INFO: stderr: "" Nov 16 09:19:42.152: INFO: stdout: "update-demo-nautilus-xnh6v " Nov 16 09:19:42.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:42.258: INFO: stderr: "" Nov 16 09:19:42.258: INFO: stdout: "true" Nov 16 09:19:42.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:42.355: INFO: stderr: "" Nov 16 09:19:42.355: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 09:19:42.355: INFO: validating pod update-demo-nautilus-xnh6v Nov 16 09:19:42.359: INFO: got data: { "image": "nautilus.jpg" } Nov 16 09:19:42.359: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 16 09:19:42.359: INFO: update-demo-nautilus-xnh6v is verified up and running STEP: scaling up the replication controller Nov 16 09:19:42.362: INFO: scanned /root for discovery docs: Nov 16 09:19:42.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6304' Nov 16 09:19:43.533: INFO: stderr: "" Nov 16 09:19:43.533: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 16 09:19:43.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:43.633: INFO: stderr: "" Nov 16 09:19:43.633: INFO: stdout: "update-demo-nautilus-xnh6v update-demo-nautilus-zcdj2 " Nov 16 09:19:43.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:43.732: INFO: stderr: "" Nov 16 09:19:43.732: INFO: stdout: "true" Nov 16 09:19:43.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:43.896: INFO: stderr: "" Nov 16 09:19:43.896: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 09:19:43.896: INFO: validating pod update-demo-nautilus-xnh6v Nov 16 09:19:43.900: INFO: got data: { "image": "nautilus.jpg" } Nov 16 09:19:43.900: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 16 09:19:43.900: INFO: update-demo-nautilus-xnh6v is verified up and running Nov 16 09:19:43.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zcdj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:44.010: INFO: stderr: "" Nov 16 09:19:44.010: INFO: stdout: "" Nov 16 09:19:44.010: INFO: update-demo-nautilus-zcdj2 is created but not running Nov 16 09:19:49.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6304' Nov 16 09:19:49.131: INFO: stderr: "" Nov 16 09:19:49.131: INFO: stdout: "update-demo-nautilus-xnh6v update-demo-nautilus-zcdj2 " Nov 16 09:19:49.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:49.232: INFO: stderr: "" Nov 16 09:19:49.232: INFO: stdout: "true" Nov 16 09:19:49.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnh6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:49.325: INFO: stderr: "" Nov 16 09:19:49.325: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 09:19:49.325: INFO: validating pod update-demo-nautilus-xnh6v Nov 16 09:19:49.328: INFO: got data: { "image": "nautilus.jpg" } Nov 16 09:19:49.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 16 09:19:49.328: INFO: update-demo-nautilus-xnh6v is verified up and running Nov 16 09:19:49.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zcdj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:49.487: INFO: stderr: "" Nov 16 09:19:49.487: INFO: stdout: "true" Nov 16 09:19:49.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zcdj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6304' Nov 16 09:19:49.603: INFO: stderr: "" Nov 16 09:19:49.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 09:19:49.603: INFO: validating pod update-demo-nautilus-zcdj2 Nov 16 09:19:49.607: INFO: got data: { "image": "nautilus.jpg" } Nov 16 09:19:49.607: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 16 09:19:49.607: INFO: update-demo-nautilus-zcdj2 is verified up and running STEP: using delete to clean up resources Nov 16 09:19:49.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6304' Nov 16 09:19:49.710: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 16 09:19:49.710: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 16 09:19:49.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6304' Nov 16 09:19:49.829: INFO: stderr: "No resources found in kubectl-6304 namespace.\n" Nov 16 09:19:49.829: INFO: stdout: "" Nov 16 09:19:49.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6304 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 16 09:19:49.927: INFO: stderr: "" Nov 16 09:19:49.927: INFO: stdout: "update-demo-nautilus-xnh6v\nupdate-demo-nautilus-zcdj2\n" Nov 16 09:19:50.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6304' Nov 16 09:19:50.531: INFO: stderr: "No resources found in kubectl-6304 namespace.\n" Nov 16 09:19:50.531: INFO: stdout: "" Nov 16 09:19:50.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6304 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 16 09:19:50.640: INFO: stderr: "" Nov 16 09:19:50.640: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:19:50.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6304" for this suite. 
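Cleanup follows the same poll-until-empty pattern: force-delete the replication controller, then repeatedly list pods by label while filtering out those that already carry a deletionTimestamp, until stdout comes back empty. A sketch of the final check, with the cluster commands from the log shown as comments:

```shell
# Commands from the log (require a live cluster):
#   kubectl delete --grace-period=0 --force -f - --namespace=kubectl-6304
#   kubectl get pods -l name=update-demo --namespace=kubectl-6304 \
#     -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
# Offline: an empty stdout from the second command signals cleanup is done.
cleanup_done() {
  [ -z "$1" ]
}
if cleanup_done ""; then
  echo "all update-demo pods deleted"
fi
```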
• [SLOW TEST:32.969 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306
    should scale a replication controller [Conformance]
    /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":63,"skipped":1086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:19:50.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-66d35cf0-d925-4df0-9cce-319e909856b2
STEP: Creating a pod to test consume secrets
Nov 16 09:19:50.891: INFO: Waiting up to 5m0s for pod "pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838" in namespace "secrets-1176" to be "Succeeded or Failed"
Nov 16 09:19:50.910: INFO: Pod "pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838": Phase="Pending", Reason="", readiness=false. Elapsed: 18.863245ms
Nov 16 09:19:52.915: INFO: Pod "pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023933309s
Nov 16 09:19:54.920: INFO: Pod "pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028534491s
STEP: Saw pod success
Nov 16 09:19:54.920: INFO: Pod "pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838" satisfied condition "Succeeded or Failed"
Nov 16 09:19:54.923: INFO: Trying to get logs from node latest-worker pod pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838 container secret-volume-test:
STEP: delete the pod
Nov 16 09:19:54.957: INFO: Waiting for pod pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838 to disappear
Nov 16 09:19:54.963: INFO: Pod pod-secrets-208c7d3f-3d01-4458-aa96-3af51e392838 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:19:54.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1176" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1114,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:19:54.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:19:55.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config version' Nov 16 09:19:55.270: INFO: stderr: "" Nov 16 09:19:55.270: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.5-rc.0\", GitCommit:\"9546a0e88d62afd8fdf50c4ed91514d5192db450\", GitTreeState:\"clean\", BuildDate:\"2020-11-11T13:36:54Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:19:55.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4600" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":65,"skipped":1126,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:19:55.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Nov 16 09:19:55.399: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 
09:20:03.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1213" for this suite. • [SLOW TEST:7.771 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":66,"skipped":1142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:03.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:20:03.138: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:04.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2999" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":67,"skipped":1168,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:04.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 16 09:20:04.749: INFO: starting watch STEP: patching STEP: updating Nov 16 09:20:04.786: INFO: waiting for watch events with expected annotations Nov 16 09:20:04.786: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching 
/approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:04.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-9211" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":68,"skipped":1185,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:04.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Nov 16 09:20:09.098: INFO: Pod pod-hostip-bd0652ce-dd76-410c-b3d1-b30fe4bcf7bd has hostIP: 172.18.0.15 [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:09.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4508" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":69,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:09.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5kgwb in namespace proxy-2720 I1116 09:20:09.218752 7 runners.go:190] Created replication controller with name: proxy-service-5kgwb, namespace: proxy-2720, replica count: 1 I1116 09:20:10.269128 7 runners.go:190] proxy-service-5kgwb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:20:11.269307 7 runners.go:190] proxy-service-5kgwb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I1116 09:20:12.269574 7 runners.go:190] proxy-service-5kgwb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1116 09:20:13.269835 7 runners.go:190] proxy-service-5kgwb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1116 09:20:14.270103 7 runners.go:190] proxy-service-5kgwb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:20:14.314: INFO: setup took 5.158562113s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Nov 16 09:20:14.321: INFO: (0) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 6.954061ms) Nov 16 09:20:14.325: INFO: (0) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 10.41133ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 11.219311ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 11.608531ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 11.531224ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 11.565289ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... 
(200; 11.469886ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 11.431847ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 11.623501ms) Nov 16 09:20:14.326: INFO: (0) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 12.154488ms) Nov 16 09:20:14.328: INFO: (0) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 14.052898ms) Nov 16 09:20:14.333: INFO: (0) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 19.167555ms) Nov 16 09:20:14.333: INFO: (0) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test (200; 5.130347ms) Nov 16 09:20:14.339: INFO: (1) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.17929ms) Nov 16 09:20:14.339: INFO: (1) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 5.29077ms) Nov 16 09:20:14.339: INFO: (1) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.27131ms) Nov 16 09:20:14.339: INFO: (1) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.320714ms) Nov 16 09:20:14.339: INFO: (1) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 5.578432ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 16.242228ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 16.249012ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 16.342308ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 16.494991ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 16.527089ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 16.58703ms) Nov 16 09:20:14.356: INFO: (2) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 19.307634ms) Nov 16 09:20:14.359: INFO: (2) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 19.341372ms) Nov 16 09:20:14.359: INFO: (2) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 19.539795ms) Nov 16 09:20:14.363: INFO: (3) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 4.371425ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 4.070894ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 4.607697ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 4.529461ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 3.778284ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 3.982543ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 4.863291ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 4.53522ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 3.630115ms) Nov 16 09:20:14.364: INFO: (3) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 3.796914ms) Nov 16 09:20:14.370: INFO: (4) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 3.800608ms) Nov 16 09:20:14.370: INFO: (4) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 4.198962ms) Nov 16 09:20:14.370: INFO: (4) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 4.149658ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 5.192069ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 5.089899ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 5.170429ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 5.098415ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 5.215575ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 5.16287ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 5.211723ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.341921ms) Nov 16 09:20:14.371: INFO: (4) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... 
(200; 3.509805ms) Nov 16 09:20:14.375: INFO: (5) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 3.54327ms) Nov 16 09:20:14.375: INFO: (5) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.556736ms) Nov 16 09:20:14.375: INFO: (5) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 3.614468ms) Nov 16 09:20:14.376: INFO: (5) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.530948ms) Nov 16 09:20:14.376: INFO: (5) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 4.526993ms) Nov 16 09:20:14.376: INFO: (5) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: ... (200; 4.935033ms) Nov 16 09:20:14.380: INFO: (6) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 3.676099ms) Nov 16 09:20:14.380: INFO: (6) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 4.271256ms) Nov 16 09:20:14.380: INFO: (6) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.268112ms) Nov 16 09:20:14.380: INFO: (6) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 5.093117ms) Nov 16 09:20:14.381: INFO: (6) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 5.153574ms) Nov 16 09:20:14.381: INFO: (6) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 5.017516ms) Nov 16 09:20:14.381: INFO: (6) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.137688ms) Nov 16 09:20:14.381: INFO: (6) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.071131ms) Nov 16 09:20:14.381: INFO: (6) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.242742ms) Nov 16 09:20:14.384: INFO: (7) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 3.496575ms) Nov 16 09:20:14.385: INFO: (7) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 3.761388ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 4.199425ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 4.573265ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 4.59918ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.628014ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 4.676124ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 4.652687ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 4.75423ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 4.656497ms) Nov 16 09:20:14.386: INFO: (7) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 4.729837ms) Nov 16 09:20:14.389: INFO: (8) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 2.322187ms) Nov 16 09:20:14.389: INFO: (8) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 2.293509ms) Nov 16 09:20:14.391: INFO: (8) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.182474ms) Nov 16 09:20:14.391: INFO: (8) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 4.484815ms) Nov 16 09:20:14.391: INFO: (8) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 4.65151ms) Nov 16 09:20:14.391: INFO: (8) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 4.907795ms) Nov 16 09:20:14.391: INFO: (8) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test (200; 4.756703ms) Nov 16 09:20:14.391: INFO: (8) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.025062ms) Nov 16 09:20:14.391: INFO: (8) 
/api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 5.084608ms) Nov 16 09:20:14.392: INFO: (8) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 5.49026ms) Nov 16 09:20:14.392: INFO: (8) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 5.386734ms) Nov 16 09:20:14.392: INFO: (8) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 5.391606ms) Nov 16 09:20:14.392: INFO: (8) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.275426ms) Nov 16 09:20:14.392: INFO: (8) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.307873ms) Nov 16 09:20:14.392: INFO: (8) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.274409ms) Nov 16 09:20:14.395: INFO: (9) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.388097ms) Nov 16 09:20:14.395: INFO: (9) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.392504ms) Nov 16 09:20:14.395: INFO: (9) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 3.391905ms) Nov 16 09:20:14.395: INFO: (9) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 3.442617ms) Nov 16 09:20:14.396: INFO: (9) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test (200; 4.863259ms) Nov 16 09:20:14.397: INFO: (9) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... 
(200; 4.940194ms) Nov 16 09:20:14.397: INFO: (9) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.891049ms) Nov 16 09:20:14.397: INFO: (9) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 4.97252ms) Nov 16 09:20:14.397: INFO: (9) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.321926ms) Nov 16 09:20:14.401: INFO: (10) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 3.208363ms) Nov 16 09:20:14.401: INFO: (10) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.427457ms) Nov 16 09:20:14.401: INFO: (10) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.618779ms) Nov 16 09:20:14.401: INFO: (10) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 3.578731ms) Nov 16 09:20:14.401: INFO: (10) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 3.631607ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 4.13075ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 4.031771ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 4.479913ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 4.398177ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 4.473625ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.81957ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 4.92868ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 4.784134ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 4.737157ms) Nov 16 09:20:14.402: INFO: (10) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 4.481807ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 3.551102ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 3.713026ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 3.717755ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 3.802935ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.853489ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 3.801765ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 3.822402ms) Nov 16 09:20:14.406: INFO: (11) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... 
(200; 4.595355ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 4.759449ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 4.87999ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.004933ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.04657ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.152599ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 5.068939ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 5.094001ms) Nov 16 09:20:14.457: INFO: (12) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 7.423383ms) Nov 16 09:20:14.467: INFO: (13) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 7.527656ms) Nov 16 09:20:14.467: INFO: (13) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 7.454533ms) Nov 16 09:20:14.467: INFO: (13) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 7.641416ms) Nov 16 09:20:14.467: INFO: (13) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 7.77274ms) Nov 16 09:20:14.469: INFO: (13) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 9.03845ms) Nov 16 09:20:14.469: INFO: (13) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 9.519471ms) Nov 16 09:20:14.469: INFO: (13) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 9.67113ms) Nov 16 09:20:14.469: INFO: (13) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 9.712647ms) Nov 16 09:20:14.469: INFO: (13) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test<... (200; 8.537674ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 8.579268ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 8.651667ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 8.585733ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 8.629428ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 8.683948ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 8.660636ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 8.522947ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 8.789943ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test (200; 8.763856ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 9.179545ms) Nov 16 09:20:14.689: INFO: (14) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 9.116909ms) Nov 16 09:20:14.690: INFO: (14) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 9.131615ms) Nov 16 09:20:14.695: INFO: (15) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 4.974844ms) Nov 16 09:20:14.695: INFO: (15) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.137315ms) Nov 16 09:20:14.695: INFO: (15) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... 
(200; 5.362934ms) Nov 16 09:20:14.695: INFO: (15) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 5.370767ms) Nov 16 09:20:14.695: INFO: (15) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 5.346208ms) Nov 16 09:20:14.695: INFO: (15) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test (200; 6.87285ms) Nov 16 09:20:14.703: INFO: (16) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 6.93813ms) Nov 16 09:20:14.703: INFO: (16) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 7.451127ms) Nov 16 09:20:14.704: INFO: (16) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 7.460854ms) Nov 16 09:20:14.704: INFO: (16) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 7.544094ms) Nov 16 09:20:14.704: INFO: (16) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 7.500331ms) Nov 16 09:20:14.704: INFO: (16) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 7.565858ms) Nov 16 09:20:14.704: INFO: (16) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 7.603384ms) Nov 16 09:20:14.707: INFO: (17) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 3.471381ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 3.932077ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 4.043516ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 3.94253ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... 
(200; 3.957115ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 4.420985ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 4.541112ms) Nov 16 09:20:14.708: INFO: (17) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: ... (200; 5.219007ms) Nov 16 09:20:14.709: INFO: (17) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 5.177334ms) Nov 16 09:20:14.709: INFO: (17) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.194599ms) Nov 16 09:20:14.709: INFO: (17) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 5.227169ms) Nov 16 09:20:14.709: INFO: (17) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.290621ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 5.769491ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 5.820306ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 5.867549ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 5.923069ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 5.897782ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 5.858268ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 5.87568ms) Nov 16 09:20:14.715: INFO: (18) 
/api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 5.926162ms) Nov 16 09:20:14.715: INFO: (18) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: test (200; 6.394936ms) Nov 16 09:20:14.716: INFO: (18) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:1080/proxy/: ... (200; 6.513053ms) Nov 16 09:20:14.716: INFO: (18) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 6.533233ms) Nov 16 09:20:14.716: INFO: (18) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 6.579669ms) Nov 16 09:20:14.716: INFO: (18) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 6.514463ms) Nov 16 09:20:14.716: INFO: (18) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 6.632276ms) Nov 16 09:20:14.716: INFO: (18) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 6.510293ms) Nov 16 09:20:14.720: INFO: (19) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:160/proxy/: foo (200; 4.237638ms) Nov 16 09:20:14.720: INFO: (19) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname1/proxy/: tls baz (200; 4.773581ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname1/proxy/: foo (200; 5.055138ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/services/http:proxy-service-5kgwb:portname2/proxy/: bar (200; 5.125978ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname1/proxy/: foo (200; 5.056777ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/pods/http:proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.203521ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:162/proxy/: bar (200; 5.154963ms) Nov 16 09:20:14.721: INFO: (19) 
/api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt:1080/proxy/: test<... (200; 5.404866ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/services/https:proxy-service-5kgwb:tlsportname2/proxy/: tls qux (200; 5.357727ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/services/proxy-service-5kgwb:portname2/proxy/: bar (200; 5.35459ms) Nov 16 09:20:14.721: INFO: (19) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:460/proxy/: tls baz (200; 5.348728ms) Nov 16 09:20:14.722: INFO: (19) /api/v1/namespaces/proxy-2720/pods/proxy-service-5kgwb-mnfpt/proxy/: test (200; 5.761782ms) Nov 16 09:20:14.722: INFO: (19) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:462/proxy/: tls qux (200; 5.67331ms) Nov 16 09:20:14.722: INFO: (19) /api/v1/namespaces/proxy-2720/pods/https:proxy-service-5kgwb-mnfpt:443/proxy/: ... (200; 5.956859ms) STEP: deleting ReplicationController proxy-service-5kgwb in namespace proxy-2720, will wait for the garbage collector to delete the pods Nov 16 09:20:14.780: INFO: Deleting ReplicationController proxy-service-5kgwb took: 5.776119ms Nov 16 09:20:14.880: INFO: Terminating ReplicationController proxy-service-5kgwb pods took: 100.205148ms [AfterEach] version v1 /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:25.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2720" for this suite. 
• [SLOW TEST:16.599 seconds] [sig-network] Proxy /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":70,"skipped":1281,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:25.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Nov 16 09:20:25.757: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: 
check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:41.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5706" for this suite. • [SLOW TEST:16.296 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":71,"skipped":1292,"failed":0} S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:42.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:20:42.046: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:46.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6110" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1293,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:46.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Nov 16 09:20:46.314: INFO: created test-event-1 Nov 16 09:20:46.326: INFO: created 
test-event-2 Nov 16 09:20:46.392: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Nov 16 09:20:46.422: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Nov 16 09:20:46.442: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:46.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6841" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":73,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:46.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in 
namespace services-5361 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5361 to expose endpoints map[] Nov 16 09:20:46.588: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Nov 16 09:20:47.597: INFO: successfully validated that service multi-endpoint-test in namespace services-5361 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5361 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5361 to expose endpoints map[pod1:[100]] Nov 16 09:20:51.648: INFO: successfully validated that service multi-endpoint-test in namespace services-5361 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-5361 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5361 to expose endpoints map[pod1:[100] pod2:[101]] Nov 16 09:20:56.722: INFO: Unexpected endpoints: found map[11baa4a8-e7fa-4723-b3b6-5dbfb1ca3073:[100]], expected map[pod1:[100] pod2:[101]], will retry Nov 16 09:20:57.728: INFO: successfully validated that service multi-endpoint-test in namespace services-5361 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-5361 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5361 to expose endpoints map[pod2:[101]] Nov 16 09:20:57.796: INFO: successfully validated that service multi-endpoint-test in namespace services-5361 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-5361 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5361 to expose endpoints map[] Nov 16 09:20:58.830: INFO: successfully validated that service multi-endpoint-test in namespace services-5361 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:20:58.856: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5361" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.413 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":74,"skipped":1336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:20:58.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7281 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-7281 Nov 16 09:20:58.982: INFO: Found 0 stateful pods, waiting for 1 Nov 16 09:21:08.987: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 16 09:21:09.012: INFO: Deleting all statefulset in ns statefulset-7281 Nov 16 09:21:09.029: INFO: Scaling statefulset ss to 0 Nov 16 09:21:19.136: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:21:19.139: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:21:19.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7281" for this suite. 
• [SLOW TEST:20.301 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":75,"skipped":1363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:21:19.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:21:19.315: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Pending, waiting for it to be Running (with Ready = true) Nov 16 09:21:21.362: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Pending, waiting for it to be Running (with Ready = true) Nov 16 09:21:23.345: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:25.350: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:27.319: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:29.344: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:31.350: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:33.320: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:35.332: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:37.321: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = false) Nov 16 09:21:39.319: INFO: The status of Pod test-webserver-9f18a380-4c39-4a33-95ab-0ec58fcdfd4c is Running (Ready = true) Nov 16 09:21:39.321: INFO: Container started at 2020-11-16 09:21:22 +0000 UTC, pod became ready at 2020-11-16 09:21:39 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:21:39.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-probe-9034" for this suite. • [SLOW TEST:20.161 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":76,"skipped":1435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:21:39.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Nov 16 09:21:39.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323
--kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-1537 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Nov 16 09:21:44.401: INFO: stderr: "" Nov 16 09:21:44.401: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Nov 16 09:21:44.401: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Nov 16 09:21:44.401: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1537" to be "running and ready, or succeeded" Nov 16 09:21:44.457: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 55.295614ms Nov 16 09:21:46.530: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12880646s Nov 16 09:21:48.560: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.158730931s Nov 16 09:21:48.560: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Nov 16 09:21:48.560: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Nov 16 09:21:48.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1537' Nov 16 09:21:48.724: INFO: stderr: "" Nov 16 09:21:48.724: INFO: stdout: "I1116 09:21:46.879585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/c5m 444\nI1116 09:21:47.079808 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/swxq 260\nI1116 09:21:47.279766 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/2v9 556\nI1116 09:21:47.479777 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/sl7 381\nI1116 09:21:47.679768 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/l767 353\nI1116 09:21:47.879773 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/tskz 532\nI1116 09:21:48.079797 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/2tx 423\nI1116 09:21:48.279729 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/sbpq 470\nI1116 09:21:48.479818 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/djn 489\nI1116 09:21:48.679723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/xfw 321\n" STEP: limiting log lines Nov 16 09:21:48.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1537 --tail=1' Nov 16 09:21:48.859: INFO: stderr: "" Nov 16 09:21:48.859: INFO: stdout: "I1116 09:21:48.679723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/xfw 321\n" Nov 16 09:21:48.859: INFO: got output "I1116 09:21:48.679723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/xfw 321\n" STEP: limiting log bytes Nov 16 09:21:48.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-1537 --limit-bytes=1' Nov 16 09:21:48.977: INFO: stderr: "" Nov 16 09:21:48.977: INFO: stdout: "I" Nov 16 09:21:48.977: INFO: got output "I" STEP: exposing timestamps Nov 16 09:21:48.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1537 --tail=1 --timestamps' Nov 16 09:21:49.102: INFO: stderr: "" Nov 16 09:21:49.102: INFO: stdout: "2020-11-16T09:21:49.079879303Z I1116 09:21:49.079720 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/m4r 527\n" Nov 16 09:21:49.102: INFO: got output "2020-11-16T09:21:49.079879303Z I1116 09:21:49.079720 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/m4r 527\n" STEP: restricting to a time range Nov 16 09:21:51.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1537 --since=1s' Nov 16 09:21:51.722: INFO: stderr: "" Nov 16 09:21:51.722: INFO: stdout: "I1116 09:21:50.879656 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/zjc 529\nI1116 09:21:51.079770 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/f6bx 349\nI1116 09:21:51.279762 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/vnn 427\nI1116 09:21:51.479760 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/bq5h 558\nI1116 09:21:51.679778 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/6vn 419\n" Nov 16 09:21:51.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1537 --since=24h' Nov 16 09:21:51.840: INFO: stderr: "" Nov 16 09:21:51.841: INFO: stdout: "I1116 09:21:46.879585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/c5m 444\nI1116 09:21:47.079808 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/swxq 
260\nI1116 09:21:47.279766 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/2v9 556\nI1116 09:21:47.479777 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/sl7 381\nI1116 09:21:47.679768 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/l767 353\nI1116 09:21:47.879773 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/tskz 532\nI1116 09:21:48.079797 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/2tx 423\nI1116 09:21:48.279729 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/sbpq 470\nI1116 09:21:48.479818 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/djn 489\nI1116 09:21:48.679723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/xfw 321\nI1116 09:21:48.879691 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/tln 540\nI1116 09:21:49.079720 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/m4r 527\nI1116 09:21:49.279787 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/tb84 429\nI1116 09:21:49.479742 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/qgq 581\nI1116 09:21:49.679683 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/7nn 456\nI1116 09:21:49.879729 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/gkrd 389\nI1116 09:21:50.079728 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/b2sv 421\nI1116 09:21:50.279695 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/k6w 411\nI1116 09:21:50.479761 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/gvr 321\nI1116 09:21:50.679757 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/q7b9 403\nI1116 09:21:50.879656 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/zjc 529\nI1116 09:21:51.079770 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/f6bx 349\nI1116 09:21:51.279762 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/vnn 427\nI1116 09:21:51.479760 1 
logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/bq5h 558\nI1116 09:21:51.679778 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/6vn 419\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Nov 16 09:21:51.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1537' Nov 16 09:21:54.287: INFO: stderr: "" Nov 16 09:21:54.287: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:21:54.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1537" for this suite. • [SLOW TEST:14.965 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":77,"skipped":1496,"failed":0} [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:21:54.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:21:56.765: INFO: Create a RollingUpdate DaemonSet Nov 16 09:21:56.860: INFO: Check that daemon pods launch on every node of the cluster Nov 16 09:21:56.891: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:21:56.902: INFO: Number of nodes with available pods: 0 Nov 16 09:21:56.902: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:21:57.907: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:21:57.911: INFO: Number of nodes with available pods: 0 Nov 16 09:21:57.911: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:21:59.415: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:21:59.419: INFO: Number of nodes with available pods: 0 Nov 16 09:21:59.419: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:21:59.906: INFO: DaemonSet 
pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:21:59.909: INFO: Number of nodes with available pods: 0 Nov 16 09:21:59.909: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:22:00.907: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:22:00.910: INFO: Number of nodes with available pods: 0 Nov 16 09:22:00.910: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:22:01.914: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:22:01.918: INFO: Number of nodes with available pods: 2 Nov 16 09:22:01.918: INFO: Number of running nodes: 2, number of available pods: 2 Nov 16 09:22:01.918: INFO: Update the DaemonSet to trigger a rollout Nov 16 09:22:01.926: INFO: Updating DaemonSet daemon-set Nov 16 09:22:15.951: INFO: Roll back the DaemonSet before rollout is complete Nov 16 09:22:15.960: INFO: Updating DaemonSet daemon-set Nov 16 09:22:15.960: INFO: Make sure DaemonSet rollback is complete Nov 16 09:22:15.978: INFO: Wrong image for pod: daemon-set-7thwm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Nov 16 09:22:15.979: INFO: Pod daemon-set-7thwm is not available Nov 16 09:22:15.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:22:17.003: INFO: Wrong image for pod: daemon-set-7thwm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Nov 16 09:22:17.003: INFO: Pod daemon-set-7thwm is not available Nov 16 09:22:17.008: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:22:18.244: INFO: Wrong image for pod: daemon-set-7thwm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Nov 16 09:22:18.244: INFO: Pod daemon-set-7thwm is not available Nov 16 09:22:18.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:22:19.002: INFO: Wrong image for pod: daemon-set-7thwm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Nov 16 09:22:19.002: INFO: Pod daemon-set-7thwm is not available Nov 16 09:22:19.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:22:20.009: INFO: Pod daemon-set-4cqf5 is not available Nov 16 09:22:20.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1200, will wait for the garbage collector to delete the pods Nov 16 09:22:20.081: INFO: Deleting DaemonSet.extensions daemon-set took: 7.178574ms Nov 16 09:22:20.481: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.163992ms Nov 16 09:22:25.794: INFO: Number of nodes with available pods: 0 Nov 16 09:22:25.794: INFO: Number of running nodes: 0, number of available 
pods: 0 Nov 16 09:22:25.801: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1200/daemonsets","resourceVersion":"9774865"},"items":null} Nov 16 09:22:25.804: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1200/pods","resourceVersion":"9774865"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:22:25.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1200" for this suite. • [SLOW TEST:31.525 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":78,"skipped":1496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:22:25.823: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 16 09:22:29.921: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:22:30.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-498" for this suite. 
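The check above relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container terminates with an error and wrote nothing to its termination-message file (`/dev/termination-log` by default), the kubelet falls back to the tail of the container's log. A minimal pod sketch, assuming a reachable cluster and the `busybox` image; the names below are illustrative, not the test's generated pod:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write DONE to the log, then exit non-zero; nothing is written to
    # /dev/termination-log, so the policy falls back to the log tail.
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
```

Once the pod reaches Failed, `kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'` should report `DONE`, matching the expectation recorded in the log above.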
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":79,"skipped":1521,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:22:30.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-0e509836-c117-40a4-964f-7a79cc9add77 STEP: Creating a pod to test consume secrets Nov 16 09:22:30.115: INFO: Waiting up to 5m0s for pod "pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f" in namespace "secrets-432" to be "Succeeded or Failed" Nov 16 09:22:30.165: INFO: Pod "pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.823597ms Nov 16 09:22:32.169: INFO: Pod "pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053979416s Nov 16 09:22:34.173: INFO: Pod "pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058071213s STEP: Saw pod success Nov 16 09:22:34.173: INFO: Pod "pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f" satisfied condition "Succeeded or Failed" Nov 16 09:22:34.176: INFO: Trying to get logs from node latest-worker pod pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f container secret-volume-test: STEP: delete the pod Nov 16 09:22:34.192: INFO: Waiting for pod pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f to disappear Nov 16 09:22:34.196: INFO: Pod pod-secrets-41fb1cd7-8764-492e-824e-e2991c39d06f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:22:34.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-432" for this suite. 
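"Mappings" and "Item Mode" in the test name refer to the `items` list of a secret volume: each entry maps a secret key to a chosen file path and may set a per-file `mode`. A minimal sketch, assuming a reachable cluster; the secret, key, and pod names are illustrative, not those generated by the test:

```shell
kubectl create secret generic secret-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:                    # the "mappings"
      - key: data-1
        path: new-path-data-1   # key projected under a chosen filename
        mode: 0400              # the "Item Mode": owner read-only
EOF
```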
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1533,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:22:34.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-873b6ab5-3404-43d9-ae3c-714c73e33b66 STEP: Creating a pod to test consume configMaps Nov 16 09:22:34.340: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc" in namespace "projected-3778" to be "Succeeded or Failed" Nov 16 09:22:34.344: INFO: Pod "pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.830886ms Nov 16 09:22:36.349: INFO: Pod "pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009299217s Nov 16 09:22:38.352: INFO: Pod "pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc": Phase="Running", Reason="", readiness=true. Elapsed: 4.012554742s Nov 16 09:22:40.357: INFO: Pod "pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017246661s STEP: Saw pod success Nov 16 09:22:40.357: INFO: Pod "pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc" satisfied condition "Succeeded or Failed" Nov 16 09:22:40.360: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc container projected-configmap-volume-test: STEP: delete the pod Nov 16 09:22:40.420: INFO: Waiting for pod pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc to disappear Nov 16 09:22:40.428: INFO: Pod pod-projected-configmaps-2cf80848-0cf7-4b13-91cd-389a6ca3cdcc no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:22:40.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3778" for this suite. 
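A `projected` volume wraps one or more sources (configMap, secret, downwardAPI, serviceAccountToken) in a single mount, and running the consuming container as non-root exercises the projected files' permission handling. A minimal sketch, assuming a reachable cluster; the configMap, key, and pod names are illustrative, not those generated by the test:

```shell
kubectl create configmap projected-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root, as in the test title
    runAsNonRoot: true
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1   # key mapped to a nested path
EOF
```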
• [SLOW TEST:6.232 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1533,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:22:40.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-bca825da-92a6-4e3e-8c61-3f224ea8cb06 in namespace container-probe-6258 Nov 16 
09:22:44.488: INFO: Started pod test-webserver-bca825da-92a6-4e3e-8c61-3f224ea8cb06 in namespace container-probe-6258 STEP: checking the pod's current state and verifying that restartCount is present Nov 16 09:22:44.491: INFO: Initial restart count of pod test-webserver-bca825da-92a6-4e3e-8c61-3f224ea8cb06 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:26:45.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6258" for this suite. • [SLOW TEST:245.246 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1544,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:26:45.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Nov 16 09:26:52.086: INFO: &Pod{ObjectMeta:{send-events-846aa224-5547-438b-8f88-b8b3f6f0a1c0 events-5819 /api/v1/namespaces/events-5819/pods/send-events-846aa224-5547-438b-8f88-b8b3f6f0a1c0 15d85dc3-52ff-4e6a-b608-58c3c1bfeacf 9775758 0 2020-11-16 09:26:46 +0000 UTC map[name:foo time:29511140] map[] [] [] [{e2e.test Update v1 2020-11-16 09:26:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 09:26:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ksdt9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ksdt9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ksdt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,D
NSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:26:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:26:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:26:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:26:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.39,StartTime:2020-11-16 09:26:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 09:26:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://10fabaa12c30f6bc3081ec0729871f5f5225ab4d9b753a119f7086e9f7e68f96,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Nov 16 09:26:54.091: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Nov 16 09:26:56.096: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:26:56.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5819" for this suite. 
• [SLOW TEST:10.433 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":83,"skipped":1548,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:26:56.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:26:56.221: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904" in namespace "downward-api-7459" to be "Succeeded or Failed" Nov 16 09:26:56.224: INFO: Pod "downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904": Phase="Pending", Reason="", readiness=false. Elapsed: 3.074493ms Nov 16 09:26:58.227: INFO: Pod "downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006473362s Nov 16 09:27:00.241: INFO: Pod "downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019927265s STEP: Saw pod success Nov 16 09:27:00.241: INFO: Pod "downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904" satisfied condition "Succeeded or Failed" Nov 16 09:27:00.243: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904 container client-container: STEP: delete the pod Nov 16 09:27:00.288: INFO: Waiting for pod downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904 to disappear Nov 16 09:27:00.296: INFO: Pod downwardapi-volume-56849c15-44f9-47fe-abe9-eee09b431904 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:27:00.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7459" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1564,"failed":0} SS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:27:00.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 in namespace container-probe-7666 Nov 16 09:27:04.379: INFO: Started pod liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 in namespace container-probe-7666 STEP: checking the pod's current state and verifying that restartCount is present Nov 16 09:27:04.382: INFO: Initial restart count of pod liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 is 0 Nov 16 09:27:21.043: INFO: Restart count of pod container-probe-7666/liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 is now 1 (16.661244139s elapsed) Nov 16 09:27:41.096: INFO: Restart count of pod container-probe-7666/liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 
is now 2 (36.713719974s elapsed) Nov 16 09:28:01.143: INFO: Restart count of pod container-probe-7666/liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 is now 3 (56.761429122s elapsed) Nov 16 09:28:19.197: INFO: Restart count of pod container-probe-7666/liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 is now 4 (1m14.814768848s elapsed) Nov 16 09:29:21.287: INFO: Restart count of pod container-probe-7666/liveness-ce2c4d79-870d-4f84-b42f-275384a15e59 is now 5 (2m16.905056123s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:29:21.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7666" for this suite. • [SLOW TEST:141.153 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:29:21.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5317 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5317 STEP: creating replication controller externalsvc in namespace services-5317 I1116 09:29:22.169583 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5317, replica count: 2 I1116 09:29:25.220042 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:29:28.220297 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Nov 16 09:29:28.259: INFO: Creating new exec pod Nov 16 09:29:32.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-5317 execpodkskl2 -- /bin/sh -x -c nslookup clusterip-service.services-5317.svc.cluster.local' Nov 16 09:29:32.581: 
INFO: stderr: "I1116 09:29:32.464043 1382 log.go:181] (0xc00003a0b0) (0xc0007ee000) Create stream\nI1116 09:29:32.464110 1382 log.go:181] (0xc00003a0b0) (0xc0007ee000) Stream added, broadcasting: 1\nI1116 09:29:32.466656 1382 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1116 09:29:32.466694 1382 log.go:181] (0xc00003a0b0) (0xc0007ee0a0) Create stream\nI1116 09:29:32.466705 1382 log.go:181] (0xc00003a0b0) (0xc0007ee0a0) Stream added, broadcasting: 3\nI1116 09:29:32.467741 1382 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1116 09:29:32.467807 1382 log.go:181] (0xc00003a0b0) (0xc0007ee140) Create stream\nI1116 09:29:32.467828 1382 log.go:181] (0xc00003a0b0) (0xc0007ee140) Stream added, broadcasting: 5\nI1116 09:29:32.469008 1382 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1116 09:29:32.561031 1382 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1116 09:29:32.561070 1382 log.go:181] (0xc0007ee140) (5) Data frame handling\nI1116 09:29:32.561089 1382 log.go:181] (0xc0007ee140) (5) Data frame sent\n+ nslookup clusterip-service.services-5317.svc.cluster.local\nI1116 09:29:32.571168 1382 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1116 09:29:32.571189 1382 log.go:181] (0xc0007ee0a0) (3) Data frame handling\nI1116 09:29:32.571215 1382 log.go:181] (0xc0007ee0a0) (3) Data frame sent\nI1116 09:29:32.572026 1382 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1116 09:29:32.572065 1382 log.go:181] (0xc0007ee0a0) (3) Data frame handling\nI1116 09:29:32.572100 1382 log.go:181] (0xc0007ee0a0) (3) Data frame sent\nI1116 09:29:32.572502 1382 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1116 09:29:32.572531 1382 log.go:181] (0xc0007ee140) (5) Data frame handling\nI1116 09:29:32.572552 1382 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1116 09:29:32.572576 1382 log.go:181] (0xc0007ee0a0) (3) Data frame handling\nI1116 09:29:32.574302 1382 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1116 
09:29:32.574336 1382 log.go:181] (0xc0007ee000) (1) Data frame handling\nI1116 09:29:32.574351 1382 log.go:181] (0xc0007ee000) (1) Data frame sent\nI1116 09:29:32.574366 1382 log.go:181] (0xc00003a0b0) (0xc0007ee000) Stream removed, broadcasting: 1\nI1116 09:29:32.574385 1382 log.go:181] (0xc00003a0b0) Go away received\nI1116 09:29:32.574755 1382 log.go:181] (0xc00003a0b0) (0xc0007ee000) Stream removed, broadcasting: 1\nI1116 09:29:32.574773 1382 log.go:181] (0xc00003a0b0) (0xc0007ee0a0) Stream removed, broadcasting: 3\nI1116 09:29:32.574785 1382 log.go:181] (0xc00003a0b0) (0xc0007ee140) Stream removed, broadcasting: 5\n" Nov 16 09:29:32.581: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5317.svc.cluster.local\tcanonical name = externalsvc.services-5317.svc.cluster.local.\nName:\texternalsvc.services-5317.svc.cluster.local\nAddress: 10.105.220.177\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5317, will wait for the garbage collector to delete the pods Nov 16 09:29:32.642: INFO: Deleting ReplicationController externalsvc took: 7.884022ms Nov 16 09:29:33.042: INFO: Terminating ReplicationController externalsvc pods took: 400.181365ms Nov 16 09:29:45.766: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:29:45.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5317" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:24.383 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":86,"skipped":1620,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:29:45.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the 
type=ExternalName in namespace services-4144 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4144 I1116 09:29:46.061176 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4144, replica count: 2 I1116 09:29:49.111593 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:29:52.111838 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:29:52.111: INFO: Creating new exec pod Nov 16 09:29:57.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4144 execpodwtph9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Nov 16 09:29:57.374: INFO: stderr: "I1116 09:29:57.264559 1400 log.go:181] (0xc000fb4fd0) (0xc000493720) Create stream\nI1116 09:29:57.264616 1400 log.go:181] (0xc000fb4fd0) (0xc000493720) Stream added, broadcasting: 1\nI1116 09:29:57.268324 1400 log.go:181] (0xc000fb4fd0) Reply frame received for 1\nI1116 09:29:57.268360 1400 log.go:181] (0xc000fb4fd0) (0xc000be40a0) Create stream\nI1116 09:29:57.268370 1400 log.go:181] (0xc000fb4fd0) (0xc000be40a0) Stream added, broadcasting: 3\nI1116 09:29:57.269480 1400 log.go:181] (0xc000fb4fd0) Reply frame received for 3\nI1116 09:29:57.269533 1400 log.go:181] (0xc000fb4fd0) (0xc00063a320) Create stream\nI1116 09:29:57.269548 1400 log.go:181] (0xc000fb4fd0) (0xc00063a320) Stream added, broadcasting: 5\nI1116 09:29:57.270548 1400 log.go:181] (0xc000fb4fd0) Reply frame received for 5\nI1116 09:29:57.367825 1400 log.go:181] (0xc000fb4fd0) Data frame received for 5\nI1116 09:29:57.367864 1400 log.go:181] (0xc00063a320) (5) Data frame handling\nI1116 09:29:57.367881 1400 
log.go:181] (0xc00063a320) (5) Data frame sent\nI1116 09:29:57.367892 1400 log.go:181] (0xc000fb4fd0) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nI1116 09:29:57.367903 1400 log.go:181] (0xc00063a320) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1116 09:29:57.367917 1400 log.go:181] (0xc000fb4fd0) Data frame received for 3\nI1116 09:29:57.367931 1400 log.go:181] (0xc000be40a0) (3) Data frame handling\nI1116 09:29:57.367948 1400 log.go:181] (0xc00063a320) (5) Data frame sent\nI1116 09:29:57.367956 1400 log.go:181] (0xc000fb4fd0) Data frame received for 5\nI1116 09:29:57.367961 1400 log.go:181] (0xc00063a320) (5) Data frame handling\nI1116 09:29:57.369879 1400 log.go:181] (0xc000fb4fd0) Data frame received for 1\nI1116 09:29:57.369919 1400 log.go:181] (0xc000493720) (1) Data frame handling\nI1116 09:29:57.369941 1400 log.go:181] (0xc000493720) (1) Data frame sent\nI1116 09:29:57.369963 1400 log.go:181] (0xc000fb4fd0) (0xc000493720) Stream removed, broadcasting: 1\nI1116 09:29:57.369999 1400 log.go:181] (0xc000fb4fd0) Go away received\nI1116 09:29:57.370328 1400 log.go:181] (0xc000fb4fd0) (0xc000493720) Stream removed, broadcasting: 1\nI1116 09:29:57.370343 1400 log.go:181] (0xc000fb4fd0) (0xc000be40a0) Stream removed, broadcasting: 3\nI1116 09:29:57.370349 1400 log.go:181] (0xc000fb4fd0) (0xc00063a320) Stream removed, broadcasting: 5\n" Nov 16 09:29:57.374: INFO: stdout: "" Nov 16 09:29:57.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4144 execpodwtph9 -- /bin/sh -x -c nc -zv -t -w 2 10.107.58.54 80' Nov 16 09:29:57.581: INFO: stderr: "I1116 09:29:57.499508 1419 log.go:181] (0xc000d56f20) (0xc00059c1e0) Create stream\nI1116 09:29:57.499576 1419 log.go:181] (0xc000d56f20) (0xc00059c1e0) Stream added, broadcasting: 1\nI1116 09:29:57.504396 1419 log.go:181] (0xc000d56f20) Reply frame received for 1\nI1116 
09:29:57.504425 1419 log.go:181] (0xc000d56f20) (0xc00059ce60) Create stream\nI1116 09:29:57.504434 1419 log.go:181] (0xc000d56f20) (0xc00059ce60) Stream added, broadcasting: 3\nI1116 09:29:57.505618 1419 log.go:181] (0xc000d56f20) Reply frame received for 3\nI1116 09:29:57.505646 1419 log.go:181] (0xc000d56f20) (0xc000ce8000) Create stream\nI1116 09:29:57.505656 1419 log.go:181] (0xc000d56f20) (0xc000ce8000) Stream added, broadcasting: 5\nI1116 09:29:57.506453 1419 log.go:181] (0xc000d56f20) Reply frame received for 5\nI1116 09:29:57.573750 1419 log.go:181] (0xc000d56f20) Data frame received for 3\nI1116 09:29:57.573790 1419 log.go:181] (0xc00059ce60) (3) Data frame handling\nI1116 09:29:57.574019 1419 log.go:181] (0xc000d56f20) Data frame received for 5\nI1116 09:29:57.574034 1419 log.go:181] (0xc000ce8000) (5) Data frame handling\nI1116 09:29:57.574047 1419 log.go:181] (0xc000ce8000) (5) Data frame sent\nI1116 09:29:57.574054 1419 log.go:181] (0xc000d56f20) Data frame received for 5\nI1116 09:29:57.574059 1419 log.go:181] (0xc000ce8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.58.54 80\nConnection to 10.107.58.54 80 port [tcp/http] succeeded!\nI1116 09:29:57.575159 1419 log.go:181] (0xc000d56f20) Data frame received for 1\nI1116 09:29:57.575175 1419 log.go:181] (0xc00059c1e0) (1) Data frame handling\nI1116 09:29:57.575325 1419 log.go:181] (0xc00059c1e0) (1) Data frame sent\nI1116 09:29:57.575348 1419 log.go:181] (0xc000d56f20) (0xc00059c1e0) Stream removed, broadcasting: 1\nI1116 09:29:57.575362 1419 log.go:181] (0xc000d56f20) Go away received\nI1116 09:29:57.575695 1419 log.go:181] (0xc000d56f20) (0xc00059c1e0) Stream removed, broadcasting: 1\nI1116 09:29:57.575712 1419 log.go:181] (0xc000d56f20) (0xc00059ce60) Stream removed, broadcasting: 3\nI1116 09:29:57.575721 1419 log.go:181] (0xc000d56f20) (0xc000ce8000) Stream removed, broadcasting: 5\n" Nov 16 09:29:57.581: INFO: stdout: "" Nov 16 09:29:57.581: INFO: Cleaning up the ExternalName to ClusterIP 
test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:29:57.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4144" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.775 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":87,"skipped":1623,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:29:57.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-46d1e4ed-d990-48a6-aeb0-705ac4ff6231 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:29:57.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9801" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":88,"skipped":1642,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:29:57.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 
16 09:29:59.334: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-41193f70-cbd2-4281-a660-fc3e81f92ee4" in namespace "security-context-test-2721" to be "Succeeded or Failed" Nov 16 09:29:59.382: INFO: Pod "busybox-readonly-false-41193f70-cbd2-4281-a660-fc3e81f92ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 47.652811ms Nov 16 09:30:01.396: INFO: Pod "busybox-readonly-false-41193f70-cbd2-4281-a660-fc3e81f92ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061853999s Nov 16 09:30:03.482: INFO: Pod "busybox-readonly-false-41193f70-cbd2-4281-a660-fc3e81f92ee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148073463s Nov 16 09:30:03.482: INFO: Pod "busybox-readonly-false-41193f70-cbd2-4281-a660-fc3e81f92ee4" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:30:03.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2721" for this suite. 
• [SLOW TEST:5.864 seconds] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:30:03.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches 
starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:30:08.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4263" for this suite. • [SLOW TEST:5.291 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":90,"skipped":1678,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:30:08.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:32:08.966: INFO: Deleting pod "var-expansion-dddc595a-3832-40e7-9eac-0f38e44f36ea" in namespace "var-expansion-9446" Nov 16 09:32:08.971: INFO: Wait up to 5m0s for pod "var-expansion-dddc595a-3832-40e7-9eac-0f38e44f36ea" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:32:13.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9446" for this suite. • [SLOW TEST:124.206 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":91,"skipped":1696,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:32:13.073: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-eebaff8c-0d7f-42ff-84ad-6f199a4c3d6c in namespace container-probe-3967 Nov 16 09:32:17.223: INFO: Started pod liveness-eebaff8c-0d7f-42ff-84ad-6f199a4c3d6c in namespace container-probe-3967 STEP: checking the pod's current state and verifying that restartCount is present Nov 16 09:32:17.226: INFO: Initial restart count of pod liveness-eebaff8c-0d7f-42ff-84ad-6f199a4c3d6c is 0 Nov 16 09:32:39.283: INFO: Restart count of pod container-probe-3967/liveness-eebaff8c-0d7f-42ff-84ad-6f199a4c3d6c is now 1 (22.057174786s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:32:39.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3967" for this suite. 
• [SLOW TEST:26.255 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":92,"skipped":1699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:32:39.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 16 09:32:39.424: INFO: Waiting up to 5m0s for pod "pod-e313a43e-0352-4bca-8cc1-cb8956509763" in namespace "emptydir-2393" to be "Succeeded or Failed" Nov 16 09:32:39.447: INFO: Pod "pod-e313a43e-0352-4bca-8cc1-cb8956509763": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.031837ms Nov 16 09:32:41.452: INFO: Pod "pod-e313a43e-0352-4bca-8cc1-cb8956509763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027815349s Nov 16 09:32:43.456: INFO: Pod "pod-e313a43e-0352-4bca-8cc1-cb8956509763": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032578145s STEP: Saw pod success Nov 16 09:32:43.457: INFO: Pod "pod-e313a43e-0352-4bca-8cc1-cb8956509763" satisfied condition "Succeeded or Failed" Nov 16 09:32:43.460: INFO: Trying to get logs from node latest-worker pod pod-e313a43e-0352-4bca-8cc1-cb8956509763 container test-container: STEP: delete the pod Nov 16 09:32:43.495: INFO: Waiting for pod pod-e313a43e-0352-4bca-8cc1-cb8956509763 to disappear Nov 16 09:32:43.531: INFO: Pod pod-e313a43e-0352-4bca-8cc1-cb8956509763 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:32:43.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2393" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1729,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:32:43.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:32:43.595: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 16 09:32:46.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5052 create -f -' Nov 16 09:32:51.393: INFO: stderr: "" Nov 16 09:32:51.393: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 16 09:32:51.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5052 delete e2e-test-crd-publish-openapi-466-crds test-cr' Nov 16 09:32:51.514: INFO: 
stderr: "" Nov 16 09:32:51.514: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Nov 16 09:32:51.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5052 apply -f -' Nov 16 09:32:51.798: INFO: stderr: "" Nov 16 09:32:51.798: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 16 09:32:51.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5052 delete e2e-test-crd-publish-openapi-466-crds test-cr' Nov 16 09:32:51.900: INFO: stderr: "" Nov 16 09:32:51.900: INFO: stdout: "e2e-test-crd-publish-openapi-466-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 16 09:32:51.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-466-crds' Nov 16 09:32:52.211: INFO: stderr: "" Nov 16 09:32:52.211: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-466-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:32:55.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5052" for this suite. 
• [SLOW TEST:11.623 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":94,"skipped":1743,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:32:55.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Nov 16 09:32:55.283: INFO: Waiting up to 5m0s for pod "var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6" in namespace "var-expansion-3903" to be "Succeeded or Failed" Nov 16 09:32:55.291: INFO: Pod 
"var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.546992ms Nov 16 09:32:57.295: INFO: Pod "var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01209182s Nov 16 09:32:59.316: INFO: Pod "var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032646602s STEP: Saw pod success Nov 16 09:32:59.316: INFO: Pod "var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6" satisfied condition "Succeeded or Failed" Nov 16 09:32:59.319: INFO: Trying to get logs from node latest-worker pod var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6 container dapi-container: STEP: delete the pod Nov 16 09:32:59.357: INFO: Waiting for pod var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6 to disappear Nov 16 09:32:59.369: INFO: Pod var-expansion-497981ff-39bd-4a41-a7cf-12db9fef7cf6 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:32:59.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3903" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1759,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:32:59.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4770 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4770 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4770 Nov 16 09:32:59.519: INFO: Found 0 stateful pods, waiting for 1 Nov 16 09:33:09.523: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Nov 16 09:33:09.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:33:09.820: INFO: stderr: "I1116 09:33:09.684102 1527 log.go:181] (0xc0006e7290) (0xc0006de820) Create stream\nI1116 09:33:09.684199 1527 log.go:181] (0xc0006e7290) (0xc0006de820) Stream added, broadcasting: 1\nI1116 09:33:09.691026 1527 log.go:181] (0xc0006e7290) Reply frame received for 1\nI1116 09:33:09.691073 1527 log.go:181] (0xc0006e7290) (0xc0006de000) Create stream\nI1116 09:33:09.691088 1527 log.go:181] (0xc0006e7290) (0xc0006de000) Stream added, broadcasting: 3\nI1116 09:33:09.692118 1527 log.go:181] (0xc0006e7290) Reply frame received for 3\nI1116 09:33:09.692148 1527 log.go:181] (0xc0006e7290) (0xc00061a0a0) Create stream\nI1116 09:33:09.692158 1527 log.go:181] (0xc0006e7290) (0xc00061a0a0) Stream added, broadcasting: 5\nI1116 09:33:09.693127 1527 log.go:181] (0xc0006e7290) Reply frame received for 5\nI1116 09:33:09.779708 1527 log.go:181] (0xc0006e7290) Data frame received for 5\nI1116 09:33:09.779738 1527 log.go:181] (0xc00061a0a0) (5) Data frame handling\nI1116 09:33:09.779761 1527 log.go:181] (0xc00061a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:33:09.807009 1527 log.go:181] (0xc0006e7290) Data frame received for 3\nI1116 09:33:09.807063 1527 log.go:181] (0xc0006de000) (3) Data frame handling\nI1116 09:33:09.807114 1527 log.go:181] (0xc0006de000) (3) Data frame sent\nI1116 09:33:09.807340 1527 log.go:181] (0xc0006e7290) Data frame received for 5\nI1116 09:33:09.807362 1527 log.go:181] (0xc00061a0a0) (5) Data frame handling\nI1116 09:33:09.807402 1527 log.go:181] (0xc0006e7290) Data frame received for 
3\nI1116 09:33:09.807436 1527 log.go:181] (0xc0006de000) (3) Data frame handling\nI1116 09:33:09.810288 1527 log.go:181] (0xc0006e7290) Data frame received for 1\nI1116 09:33:09.810333 1527 log.go:181] (0xc0006de820) (1) Data frame handling\nI1116 09:33:09.810363 1527 log.go:181] (0xc0006de820) (1) Data frame sent\nI1116 09:33:09.810389 1527 log.go:181] (0xc0006e7290) (0xc0006de820) Stream removed, broadcasting: 1\nI1116 09:33:09.810436 1527 log.go:181] (0xc0006e7290) Go away received\nI1116 09:33:09.811015 1527 log.go:181] (0xc0006e7290) (0xc0006de820) Stream removed, broadcasting: 1\nI1116 09:33:09.811051 1527 log.go:181] (0xc0006e7290) (0xc0006de000) Stream removed, broadcasting: 3\nI1116 09:33:09.811080 1527 log.go:181] (0xc0006e7290) (0xc00061a0a0) Stream removed, broadcasting: 5\n" Nov 16 09:33:09.820: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:33:09.820: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:33:09.823: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 16 09:33:19.828: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:33:19.828: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:33:19.849: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999552s Nov 16 09:33:20.854: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.9884373s Nov 16 09:33:21.859: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983839s Nov 16 09:33:22.864: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978754224s Nov 16 09:33:23.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.973974808s Nov 16 09:33:24.873: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.969668061s Nov 16 09:33:25.877: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 3.964915291s Nov 16 09:33:26.882: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.961020728s Nov 16 09:33:27.887: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.956233458s Nov 16 09:33:28.891: INFO: Verifying statefulset ss doesn't scale past 1 for another 951.150238ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4770 Nov 16 09:33:29.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:33:30.101: INFO: stderr: "I1116 09:33:30.032053 1545 log.go:181] (0xc000ccb080) (0xc000d288c0) Create stream\nI1116 09:33:30.032128 1545 log.go:181] (0xc000ccb080) (0xc000d288c0) Stream added, broadcasting: 1\nI1116 09:33:30.039000 1545 log.go:181] (0xc000ccb080) Reply frame received for 1\nI1116 09:33:30.039031 1545 log.go:181] (0xc000ccb080) (0xc000d28000) Create stream\nI1116 09:33:30.039039 1545 log.go:181] (0xc000ccb080) (0xc000d28000) Stream added, broadcasting: 3\nI1116 09:33:30.039772 1545 log.go:181] (0xc000ccb080) Reply frame received for 3\nI1116 09:33:30.039820 1545 log.go:181] (0xc000ccb080) (0xc000633cc0) Create stream\nI1116 09:33:30.039834 1545 log.go:181] (0xc000ccb080) (0xc000633cc0) Stream added, broadcasting: 5\nI1116 09:33:30.040509 1545 log.go:181] (0xc000ccb080) Reply frame received for 5\nI1116 09:33:30.094139 1545 log.go:181] (0xc000ccb080) Data frame received for 5\nI1116 09:33:30.094164 1545 log.go:181] (0xc000633cc0) (5) Data frame handling\nI1116 09:33:30.094172 1545 log.go:181] (0xc000633cc0) (5) Data frame sent\nI1116 09:33:30.094178 1545 log.go:181] (0xc000ccb080) Data frame received for 5\nI1116 09:33:30.094182 1545 log.go:181] (0xc000633cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 
09:33:30.094216 1545 log.go:181] (0xc000ccb080) Data frame received for 3\nI1116 09:33:30.094244 1545 log.go:181] (0xc000d28000) (3) Data frame handling\nI1116 09:33:30.094257 1545 log.go:181] (0xc000d28000) (3) Data frame sent\nI1116 09:33:30.094267 1545 log.go:181] (0xc000ccb080) Data frame received for 3\nI1116 09:33:30.094275 1545 log.go:181] (0xc000d28000) (3) Data frame handling\nI1116 09:33:30.095741 1545 log.go:181] (0xc000ccb080) Data frame received for 1\nI1116 09:33:30.095761 1545 log.go:181] (0xc000d288c0) (1) Data frame handling\nI1116 09:33:30.095779 1545 log.go:181] (0xc000d288c0) (1) Data frame sent\nI1116 09:33:30.095793 1545 log.go:181] (0xc000ccb080) (0xc000d288c0) Stream removed, broadcasting: 1\nI1116 09:33:30.095852 1545 log.go:181] (0xc000ccb080) Go away received\nI1116 09:33:30.096105 1545 log.go:181] (0xc000ccb080) (0xc000d288c0) Stream removed, broadcasting: 1\nI1116 09:33:30.096117 1545 log.go:181] (0xc000ccb080) (0xc000d28000) Stream removed, broadcasting: 3\nI1116 09:33:30.096122 1545 log.go:181] (0xc000ccb080) (0xc000633cc0) Stream removed, broadcasting: 5\n" Nov 16 09:33:30.101: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:33:30.101: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:33:30.105: INFO: Found 1 stateful pods, waiting for 3 Nov 16 09:33:40.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:33:40.119: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:33:40.119: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Nov 16 09:33:40.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:33:40.347: INFO: stderr: "I1116 09:33:40.268335 1563 log.go:181] (0xc000a41080) (0xc00051a460) Create stream\nI1116 09:33:40.268393 1563 log.go:181] (0xc000a41080) (0xc00051a460) Stream added, broadcasting: 1\nI1116 09:33:40.273122 1563 log.go:181] (0xc000a41080) Reply frame received for 1\nI1116 09:33:40.273165 1563 log.go:181] (0xc000a41080) (0xc00051ac80) Create stream\nI1116 09:33:40.273177 1563 log.go:181] (0xc000a41080) (0xc00051ac80) Stream added, broadcasting: 3\nI1116 09:33:40.273920 1563 log.go:181] (0xc000a41080) Reply frame received for 3\nI1116 09:33:40.273943 1563 log.go:181] (0xc000a41080) (0xc00051b400) Create stream\nI1116 09:33:40.273951 1563 log.go:181] (0xc000a41080) (0xc00051b400) Stream added, broadcasting: 5\nI1116 09:33:40.274736 1563 log.go:181] (0xc000a41080) Reply frame received for 5\nI1116 09:33:40.337893 1563 log.go:181] (0xc000a41080) Data frame received for 3\nI1116 09:33:40.337922 1563 log.go:181] (0xc00051ac80) (3) Data frame handling\nI1116 09:33:40.337930 1563 log.go:181] (0xc00051ac80) (3) Data frame sent\nI1116 09:33:40.337936 1563 log.go:181] (0xc000a41080) Data frame received for 3\nI1116 09:33:40.337963 1563 log.go:181] (0xc000a41080) Data frame received for 5\nI1116 09:33:40.338007 1563 log.go:181] (0xc00051b400) (5) Data frame handling\nI1116 09:33:40.338024 1563 log.go:181] (0xc00051b400) (5) Data frame sent\nI1116 09:33:40.338036 1563 log.go:181] (0xc000a41080) Data frame received for 5\nI1116 09:33:40.338046 1563 log.go:181] (0xc00051b400) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:33:40.338086 1563 log.go:181] (0xc00051ac80) (3) Data frame handling\nI1116 09:33:40.339415 1563 log.go:181] (0xc000a41080) Data frame received for 1\nI1116 09:33:40.339437 1563 log.go:181] (0xc00051a460) (1) Data frame handling\nI1116 
09:33:40.339448 1563 log.go:181] (0xc00051a460) (1) Data frame sent\nI1116 09:33:40.339457 1563 log.go:181] (0xc000a41080) (0xc00051a460) Stream removed, broadcasting: 1\nI1116 09:33:40.339468 1563 log.go:181] (0xc000a41080) Go away received\nI1116 09:33:40.339794 1563 log.go:181] (0xc000a41080) (0xc00051a460) Stream removed, broadcasting: 1\nI1116 09:33:40.339815 1563 log.go:181] (0xc000a41080) (0xc00051ac80) Stream removed, broadcasting: 3\nI1116 09:33:40.339821 1563 log.go:181] (0xc000a41080) (0xc00051b400) Stream removed, broadcasting: 5\n" Nov 16 09:33:40.347: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:33:40.347: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:33:40.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:33:40.633: INFO: stderr: "I1116 09:33:40.493078 1581 log.go:181] (0xc000f191e0) (0xc00050ad20) Create stream\nI1116 09:33:40.493133 1581 log.go:181] (0xc000f191e0) (0xc00050ad20) Stream added, broadcasting: 1\nI1116 09:33:40.501321 1581 log.go:181] (0xc000f191e0) Reply frame received for 1\nI1116 09:33:40.501395 1581 log.go:181] (0xc000f191e0) (0xc00050b7c0) Create stream\nI1116 09:33:40.501416 1581 log.go:181] (0xc000f191e0) (0xc00050b7c0) Stream added, broadcasting: 3\nI1116 09:33:40.502501 1581 log.go:181] (0xc000f191e0) Reply frame received for 3\nI1116 09:33:40.502543 1581 log.go:181] (0xc000f191e0) (0xc000c22280) Create stream\nI1116 09:33:40.502559 1581 log.go:181] (0xc000f191e0) (0xc000c22280) Stream added, broadcasting: 5\nI1116 09:33:40.503634 1581 log.go:181] (0xc000f191e0) Reply frame received for 5\nI1116 09:33:40.575949 1581 log.go:181] (0xc000f191e0) Data frame received for 5\nI1116 09:33:40.575981 1581 
log.go:181] (0xc000c22280) (5) Data frame handling\nI1116 09:33:40.575997 1581 log.go:181] (0xc000c22280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:33:40.628148 1581 log.go:181] (0xc000f191e0) Data frame received for 5\nI1116 09:33:40.628195 1581 log.go:181] (0xc000c22280) (5) Data frame handling\nI1116 09:33:40.628217 1581 log.go:181] (0xc000f191e0) Data frame received for 3\nI1116 09:33:40.628226 1581 log.go:181] (0xc00050b7c0) (3) Data frame handling\nI1116 09:33:40.628236 1581 log.go:181] (0xc00050b7c0) (3) Data frame sent\nI1116 09:33:40.628243 1581 log.go:181] (0xc000f191e0) Data frame received for 3\nI1116 09:33:40.628250 1581 log.go:181] (0xc00050b7c0) (3) Data frame handling\nI1116 09:33:40.628273 1581 log.go:181] (0xc000f191e0) Data frame received for 1\nI1116 09:33:40.628291 1581 log.go:181] (0xc00050ad20) (1) Data frame handling\nI1116 09:33:40.628304 1581 log.go:181] (0xc00050ad20) (1) Data frame sent\nI1116 09:33:40.628317 1581 log.go:181] (0xc000f191e0) (0xc00050ad20) Stream removed, broadcasting: 1\nI1116 09:33:40.628335 1581 log.go:181] (0xc000f191e0) Go away received\nI1116 09:33:40.628696 1581 log.go:181] (0xc000f191e0) (0xc00050ad20) Stream removed, broadcasting: 1\nI1116 09:33:40.628712 1581 log.go:181] (0xc000f191e0) (0xc00050b7c0) Stream removed, broadcasting: 3\nI1116 09:33:40.628721 1581 log.go:181] (0xc000f191e0) (0xc000c22280) Stream removed, broadcasting: 5\n" Nov 16 09:33:40.633: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:33:40.633: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:33:40.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:33:40.884: INFO: stderr: "I1116 
09:33:40.777381 1599 log.go:181] (0xc000a9f290) (0xc0009b8640) Create stream\nI1116 09:33:40.777435 1599 log.go:181] (0xc000a9f290) (0xc0009b8640) Stream added, broadcasting: 1\nI1116 09:33:40.780794 1599 log.go:181] (0xc000a9f290) Reply frame received for 1\nI1116 09:33:40.780921 1599 log.go:181] (0xc000a9f290) (0xc000c940a0) Create stream\nI1116 09:33:40.780937 1599 log.go:181] (0xc000a9f290) (0xc000c940a0) Stream added, broadcasting: 3\nI1116 09:33:40.781839 1599 log.go:181] (0xc000a9f290) Reply frame received for 3\nI1116 09:33:40.781898 1599 log.go:181] (0xc000a9f290) (0xc00083c000) Create stream\nI1116 09:33:40.781927 1599 log.go:181] (0xc000a9f290) (0xc00083c000) Stream added, broadcasting: 5\nI1116 09:33:40.782743 1599 log.go:181] (0xc000a9f290) Reply frame received for 5\nI1116 09:33:40.845440 1599 log.go:181] (0xc000a9f290) Data frame received for 5\nI1116 09:33:40.845472 1599 log.go:181] (0xc00083c000) (5) Data frame handling\nI1116 09:33:40.845494 1599 log.go:181] (0xc00083c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:33:40.875784 1599 log.go:181] (0xc000a9f290) Data frame received for 5\nI1116 09:33:40.875830 1599 log.go:181] (0xc00083c000) (5) Data frame handling\nI1116 09:33:40.875855 1599 log.go:181] (0xc000a9f290) Data frame received for 3\nI1116 09:33:40.875865 1599 log.go:181] (0xc000c940a0) (3) Data frame handling\nI1116 09:33:40.875883 1599 log.go:181] (0xc000c940a0) (3) Data frame sent\nI1116 09:33:40.876091 1599 log.go:181] (0xc000a9f290) Data frame received for 3\nI1116 09:33:40.876124 1599 log.go:181] (0xc000c940a0) (3) Data frame handling\nI1116 09:33:40.878373 1599 log.go:181] (0xc000a9f290) Data frame received for 1\nI1116 09:33:40.878407 1599 log.go:181] (0xc0009b8640) (1) Data frame handling\nI1116 09:33:40.878428 1599 log.go:181] (0xc0009b8640) (1) Data frame sent\nI1116 09:33:40.878449 1599 log.go:181] (0xc000a9f290) (0xc0009b8640) Stream removed, broadcasting: 1\nI1116 09:33:40.878482 1599 
log.go:181] (0xc000a9f290) Go away received\nI1116 09:33:40.878753 1599 log.go:181] (0xc000a9f290) (0xc0009b8640) Stream removed, broadcasting: 1\nI1116 09:33:40.878766 1599 log.go:181] (0xc000a9f290) (0xc000c940a0) Stream removed, broadcasting: 3\nI1116 09:33:40.878773 1599 log.go:181] (0xc000a9f290) (0xc00083c000) Stream removed, broadcasting: 5\n" Nov 16 09:33:40.884: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:33:40.884: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:33:40.884: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:33:40.888: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 16 09:33:50.897: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:33:50.897: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:33:50.897: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:33:50.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999513s Nov 16 09:33:51.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976231555s Nov 16 09:33:52.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970137265s Nov 16 09:33:53.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964010323s Nov 16 09:33:54.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959445487s Nov 16 09:33:55.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.953465302s Nov 16 09:33:56.965: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.946970128s Nov 16 09:33:57.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.940273519s Nov 16 09:33:58.976: INFO: Verifying statefulset ss doesn't scale past 3 for another 
1.934841147s Nov 16 09:33:59.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 928.777531ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4770 Nov 16 09:34:00.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:34:01.242: INFO: stderr: "I1116 09:34:01.137107 1617 log.go:181] (0xc000e95080) (0xc0006157c0) Create stream\nI1116 09:34:01.137162 1617 log.go:181] (0xc000e95080) (0xc0006157c0) Stream added, broadcasting: 1\nI1116 09:34:01.140791 1617 log.go:181] (0xc000e95080) Reply frame received for 1\nI1116 09:34:01.140828 1617 log.go:181] (0xc000e95080) (0xc000baa0a0) Create stream\nI1116 09:34:01.140904 1617 log.go:181] (0xc000e95080) (0xc000baa0a0) Stream added, broadcasting: 3\nI1116 09:34:01.141523 1617 log.go:181] (0xc000e95080) Reply frame received for 3\nI1116 09:34:01.141549 1617 log.go:181] (0xc000e95080) (0xc000b145a0) Create stream\nI1116 09:34:01.141565 1617 log.go:181] (0xc000e95080) (0xc000b145a0) Stream added, broadcasting: 5\nI1116 09:34:01.142254 1617 log.go:181] (0xc000e95080) Reply frame received for 5\nI1116 09:34:01.234288 1617 log.go:181] (0xc000e95080) Data frame received for 5\nI1116 09:34:01.234340 1617 log.go:181] (0xc000b145a0) (5) Data frame handling\nI1116 09:34:01.234356 1617 log.go:181] (0xc000b145a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 09:34:01.234370 1617 log.go:181] (0xc000e95080) Data frame received for 5\nI1116 09:34:01.234429 1617 log.go:181] (0xc000b145a0) (5) Data frame handling\nI1116 09:34:01.234510 1617 log.go:181] (0xc000e95080) Data frame received for 3\nI1116 09:34:01.234534 1617 log.go:181] (0xc000baa0a0) (3) Data frame handling\nI1116 09:34:01.234554 1617 log.go:181] (0xc000baa0a0) (3) Data frame sent\nI1116 
09:34:01.234569 1617 log.go:181] (0xc000e95080) Data frame received for 3\nI1116 09:34:01.234580 1617 log.go:181] (0xc000baa0a0) (3) Data frame handling\nI1116 09:34:01.235855 1617 log.go:181] (0xc000e95080) Data frame received for 1\nI1116 09:34:01.235880 1617 log.go:181] (0xc0006157c0) (1) Data frame handling\nI1116 09:34:01.235896 1617 log.go:181] (0xc0006157c0) (1) Data frame sent\nI1116 09:34:01.235915 1617 log.go:181] (0xc000e95080) (0xc0006157c0) Stream removed, broadcasting: 1\nI1116 09:34:01.235941 1617 log.go:181] (0xc000e95080) Go away received\nI1116 09:34:01.236263 1617 log.go:181] (0xc000e95080) (0xc0006157c0) Stream removed, broadcasting: 1\nI1116 09:34:01.236278 1617 log.go:181] (0xc000e95080) (0xc000baa0a0) Stream removed, broadcasting: 3\nI1116 09:34:01.236285 1617 log.go:181] (0xc000e95080) (0xc000b145a0) Stream removed, broadcasting: 5\n" Nov 16 09:34:01.242: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:34:01.242: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:34:01.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:34:01.464: INFO: stderr: "I1116 09:34:01.379061 1636 log.go:181] (0xc000216e70) (0xc0003cad20) Create stream\nI1116 09:34:01.379122 1636 log.go:181] (0xc000216e70) (0xc0003cad20) Stream added, broadcasting: 1\nI1116 09:34:01.384167 1636 log.go:181] (0xc000216e70) Reply frame received for 1\nI1116 09:34:01.384212 1636 log.go:181] (0xc000216e70) (0xc000d4a0a0) Create stream\nI1116 09:34:01.384228 1636 log.go:181] (0xc000216e70) (0xc000d4a0a0) Stream added, broadcasting: 3\nI1116 09:34:01.385332 1636 log.go:181] (0xc000216e70) Reply frame received for 3\nI1116 09:34:01.385377 1636 log.go:181] (0xc000216e70) 
(0xc000d4a140) Create stream\nI1116 09:34:01.385391 1636 log.go:181] (0xc000216e70) (0xc000d4a140) Stream added, broadcasting: 5\nI1116 09:34:01.386213 1636 log.go:181] (0xc000216e70) Reply frame received for 5\nI1116 09:34:01.454582 1636 log.go:181] (0xc000216e70) Data frame received for 5\nI1116 09:34:01.454623 1636 log.go:181] (0xc000d4a140) (5) Data frame handling\nI1116 09:34:01.454637 1636 log.go:181] (0xc000d4a140) (5) Data frame sent\nI1116 09:34:01.454652 1636 log.go:181] (0xc000216e70) Data frame received for 5\nI1116 09:34:01.454667 1636 log.go:181] (0xc000d4a140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 09:34:01.454714 1636 log.go:181] (0xc000216e70) Data frame received for 3\nI1116 09:34:01.454757 1636 log.go:181] (0xc000d4a0a0) (3) Data frame handling\nI1116 09:34:01.454776 1636 log.go:181] (0xc000d4a0a0) (3) Data frame sent\nI1116 09:34:01.454791 1636 log.go:181] (0xc000216e70) Data frame received for 3\nI1116 09:34:01.454801 1636 log.go:181] (0xc000d4a0a0) (3) Data frame handling\nI1116 09:34:01.455557 1636 log.go:181] (0xc000216e70) Data frame received for 1\nI1116 09:34:01.455582 1636 log.go:181] (0xc0003cad20) (1) Data frame handling\nI1116 09:34:01.455594 1636 log.go:181] (0xc0003cad20) (1) Data frame sent\nI1116 09:34:01.455614 1636 log.go:181] (0xc000216e70) (0xc0003cad20) Stream removed, broadcasting: 1\nI1116 09:34:01.455643 1636 log.go:181] (0xc000216e70) Go away received\nI1116 09:34:01.455996 1636 log.go:181] (0xc000216e70) (0xc0003cad20) Stream removed, broadcasting: 1\nI1116 09:34:01.456024 1636 log.go:181] (0xc000216e70) (0xc000d4a0a0) Stream removed, broadcasting: 3\nI1116 09:34:01.456035 1636 log.go:181] (0xc000216e70) (0xc000d4a140) Stream removed, broadcasting: 5\n" Nov 16 09:34:01.464: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:34:01.464: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' Nov 16 09:34:01.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4770 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:34:01.687: INFO: stderr: "I1116 09:34:01.620339 1654 log.go:181] (0xc00012cf20) (0xc0003cd720) Create stream\nI1116 09:34:01.620385 1654 log.go:181] (0xc00012cf20) (0xc0003cd720) Stream added, broadcasting: 1\nI1116 09:34:01.623597 1654 log.go:181] (0xc00012cf20) Reply frame received for 1\nI1116 09:34:01.623631 1654 log.go:181] (0xc00012cf20) (0xc000f28000) Create stream\nI1116 09:34:01.623645 1654 log.go:181] (0xc00012cf20) (0xc000f28000) Stream added, broadcasting: 3\nI1116 09:34:01.624267 1654 log.go:181] (0xc00012cf20) Reply frame received for 3\nI1116 09:34:01.624297 1654 log.go:181] (0xc00012cf20) (0xc000846d20) Create stream\nI1116 09:34:01.624308 1654 log.go:181] (0xc00012cf20) (0xc000846d20) Stream added, broadcasting: 5\nI1116 09:34:01.625038 1654 log.go:181] (0xc00012cf20) Reply frame received for 5\nI1116 09:34:01.681032 1654 log.go:181] (0xc00012cf20) Data frame received for 5\nI1116 09:34:01.681060 1654 log.go:181] (0xc000846d20) (5) Data frame handling\nI1116 09:34:01.681067 1654 log.go:181] (0xc000846d20) (5) Data frame sent\nI1116 09:34:01.681072 1654 log.go:181] (0xc00012cf20) Data frame received for 5\nI1116 09:34:01.681076 1654 log.go:181] (0xc000846d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 09:34:01.681091 1654 log.go:181] (0xc00012cf20) Data frame received for 3\nI1116 09:34:01.681095 1654 log.go:181] (0xc000f28000) (3) Data frame handling\nI1116 09:34:01.681100 1654 log.go:181] (0xc000f28000) (3) Data frame sent\nI1116 09:34:01.681105 1654 log.go:181] (0xc00012cf20) Data frame received for 3\nI1116 09:34:01.681109 1654 log.go:181] (0xc000f28000) (3) Data frame handling\nI1116 09:34:01.682075 1654 log.go:181] 
(0xc00012cf20) Data frame received for 1\nI1116 09:34:01.682101 1654 log.go:181] (0xc0003cd720) (1) Data frame handling\nI1116 09:34:01.682111 1654 log.go:181] (0xc0003cd720) (1) Data frame sent\nI1116 09:34:01.682125 1654 log.go:181] (0xc00012cf20) (0xc0003cd720) Stream removed, broadcasting: 1\nI1116 09:34:01.682139 1654 log.go:181] (0xc00012cf20) Go away received\nI1116 09:34:01.682519 1654 log.go:181] (0xc00012cf20) (0xc0003cd720) Stream removed, broadcasting: 1\nI1116 09:34:01.682534 1654 log.go:181] (0xc00012cf20) (0xc000f28000) Stream removed, broadcasting: 3\nI1116 09:34:01.682541 1654 log.go:181] (0xc00012cf20) (0xc000846d20) Stream removed, broadcasting: 5\n" Nov 16 09:34:01.688: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:34:01.688: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:34:01.688: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 16 09:34:41.706: INFO: Deleting all statefulset in ns statefulset-4770 Nov 16 09:34:41.720: INFO: Scaling statefulset ss to 0 Nov 16 09:34:41.730: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:34:41.732: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:34:41.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4770" for this suite. 
• [SLOW TEST:102.374 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":96,"skipped":1759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:34:41.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Nov 16 09:34:41.890: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:34:41.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3414" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":97,"skipped":1796,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:34:41.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 
a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Nov 16 09:34:42.034: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:42.059: INFO: Number of nodes with available pods: 0 Nov 16 09:34:42.059: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:43.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:43.069: INFO: Number of nodes with available pods: 0 Nov 16 09:34:43.069: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:44.271: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:44.274: INFO: Number of nodes with available pods: 0 Nov 16 09:34:44.274: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:45.312: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:45.315: INFO: Number of nodes with available pods: 0 Nov 16 09:34:45.315: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:46.065: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:46.069: INFO: Number of nodes with available pods: 1 Nov 16 09:34:46.069: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:47.066: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Nov 16 09:34:47.072: INFO: Number of nodes with available pods: 2 Nov 16 09:34:47.072: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Nov 16 09:34:47.163: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:47.232: INFO: Number of nodes with available pods: 1 Nov 16 09:34:47.232: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:48.648: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:48.685: INFO: Number of nodes with available pods: 1 Nov 16 09:34:48.685: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:49.253: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:49.256: INFO: Number of nodes with available pods: 1 Nov 16 09:34:49.257: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:50.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:50.242: INFO: Number of nodes with available pods: 1 Nov 16 09:34:50.242: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:51.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:51.242: INFO: Number of nodes with available pods: 1 Nov 16 09:34:51.242: INFO: Node latest-worker is running more than one daemon pod Nov 16 09:34:52.240: INFO: DaemonSet pods 
can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 09:34:52.244: INFO: Number of nodes with available pods: 2 Nov 16 09:34:52.244: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9585, will wait for the garbage collector to delete the pods Nov 16 09:34:52.309: INFO: Deleting DaemonSet.extensions daemon-set took: 6.727992ms Nov 16 09:34:52.809: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.270482ms Nov 16 09:35:05.812: INFO: Number of nodes with available pods: 0 Nov 16 09:35:05.812: INFO: Number of running nodes: 0, number of available pods: 0 Nov 16 09:35:05.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9585/daemonsets","resourceVersion":"9778022"},"items":null} Nov 16 09:35:05.817: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9585/pods","resourceVersion":"9778022"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:05.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9585" for this suite. 
• [SLOW TEST:23.888 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":98,"skipped":1797,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:35:05.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:10.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubelet-test-1943" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:35:10.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:35:10.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0" in namespace "projected-7267" to be "Succeeded or Failed" Nov 16 09:35:10.128: INFO: Pod "downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.056796ms Nov 16 09:35:12.156: INFO: Pod "downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037214587s Nov 16 09:35:14.161: INFO: Pod "downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041645149s STEP: Saw pod success Nov 16 09:35:14.161: INFO: Pod "downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0" satisfied condition "Succeeded or Failed" Nov 16 09:35:14.164: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0 container client-container: STEP: delete the pod Nov 16 09:35:14.213: INFO: Waiting for pod downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0 to disappear Nov 16 09:35:14.230: INFO: Pod downwardapi-volume-ccbe6bf2-c4ea-46e8-9351-a2c9bb672fc0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:14.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7267" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:35:14.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if v1 is in available api versions [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Nov 16 09:35:14.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config api-versions'
Nov 16 09:35:14.547: INFO: stderr: ""
Nov 16 09:35:14.547: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:35:14.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2703" for this suite.
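The api-versions check above runs `kubectl api-versions` and verifies that the core "v1" group/version appears in the newline-separated stdout. A small sketch of that validation step (the helper name is illustrative, not the framework's own):

```python
def has_core_v1(api_versions_stdout: str) -> bool:
    """Return True if the core "v1" group/version appears in
    `kubectl api-versions` output (one group/version per line).

    An exact line match is needed: a substring test would wrongly
    accept entries like "apps/v1".
    """
    return "v1" in api_versions_stdout.splitlines()

# Abbreviated stand-in for the stdout captured in the log:
stdout = "admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\nv1\n"
```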
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":101,"skipped":1883,"failed":0}
S
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:35:14.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 09:35:14.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6177
I1116 09:35:14.644900 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6177, replica count: 1
I1116 09:35:15.695200 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1116 09:35:16.695436 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1116 09:35:17.695623 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1116 09:35:18.695835 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:35:19.696064 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:35:20.696238 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:35:20.825: INFO: Created: latency-svc-v69wr Nov 16 09:35:20.836: INFO: Got endpoints: latency-svc-v69wr [40.567348ms] Nov 16 09:35:20.977: INFO: Created: latency-svc-jrqwc Nov 16 09:35:21.011: INFO: Got endpoints: latency-svc-jrqwc [174.699537ms] Nov 16 09:35:21.012: INFO: Created: latency-svc-bbkjp Nov 16 09:35:21.047: INFO: Got endpoints: latency-svc-bbkjp [210.251326ms] Nov 16 09:35:21.163: INFO: Created: latency-svc-fh4qh Nov 16 09:35:21.166: INFO: Got endpoints: latency-svc-fh4qh [329.752237ms] Nov 16 09:35:21.349: INFO: Created: latency-svc-snpkn Nov 16 09:35:21.388: INFO: Got endpoints: latency-svc-snpkn [551.441333ms] Nov 16 09:35:21.413: INFO: Created: latency-svc-qhsd8 Nov 16 09:35:21.474: INFO: Got endpoints: latency-svc-qhsd8 [637.017684ms] Nov 16 09:35:21.521: INFO: Created: latency-svc-7pf72 Nov 16 09:35:21.538: INFO: Got endpoints: latency-svc-7pf72 [701.343265ms] Nov 16 09:35:21.624: INFO: Created: latency-svc-rdjmf Nov 16 09:35:21.634: INFO: Got endpoints: latency-svc-rdjmf [797.035732ms] Nov 16 09:35:21.659: INFO: Created: latency-svc-kbsn7 Nov 16 09:35:21.674: INFO: Got endpoints: latency-svc-kbsn7 [837.239866ms] Nov 16 09:35:21.696: INFO: Created: latency-svc-8k29b Nov 16 09:35:21.767: INFO: Got endpoints: latency-svc-8k29b [930.616738ms] Nov 16 09:35:21.785: INFO: Created: latency-svc-stzwn Nov 16 09:35:21.803: INFO: Got endpoints: latency-svc-stzwn [965.750422ms] Nov 16 09:35:21.833: INFO: Created: latency-svc-2qmwg Nov 16 09:35:21.857: INFO: Got endpoints: latency-svc-2qmwg [1.020268482s] Nov 16 09:35:21.917: INFO: Created: 
latency-svc-rx22v Nov 16 09:35:21.953: INFO: Got endpoints: latency-svc-rx22v [1.116468179s] Nov 16 09:35:22.002: INFO: Created: latency-svc-kz5dt Nov 16 09:35:22.043: INFO: Got endpoints: latency-svc-kz5dt [1.205849011s] Nov 16 09:35:22.067: INFO: Created: latency-svc-4skrc Nov 16 09:35:22.096: INFO: Got endpoints: latency-svc-4skrc [1.258825945s] Nov 16 09:35:22.205: INFO: Created: latency-svc-wvcqz Nov 16 09:35:22.216: INFO: Got endpoints: latency-svc-wvcqz [1.378763129s] Nov 16 09:35:22.240: INFO: Created: latency-svc-6vz8t Nov 16 09:35:22.258: INFO: Got endpoints: latency-svc-6vz8t [1.246665608s] Nov 16 09:35:22.343: INFO: Created: latency-svc-2bk7b Nov 16 09:35:22.366: INFO: Got endpoints: latency-svc-2bk7b [1.319610128s] Nov 16 09:35:22.403: INFO: Created: latency-svc-8qbln Nov 16 09:35:22.414: INFO: Got endpoints: latency-svc-8qbln [1.248074345s] Nov 16 09:35:22.504: INFO: Created: latency-svc-tntxn Nov 16 09:35:22.507: INFO: Got endpoints: latency-svc-tntxn [1.119325631s] Nov 16 09:35:22.582: INFO: Created: latency-svc-zbmnp Nov 16 09:35:22.595: INFO: Got endpoints: latency-svc-zbmnp [1.120928709s] Nov 16 09:35:22.648: INFO: Created: latency-svc-6q5wr Nov 16 09:35:22.651: INFO: Got endpoints: latency-svc-6q5wr [1.112757717s] Nov 16 09:35:22.685: INFO: Created: latency-svc-g2rpj Nov 16 09:35:22.697: INFO: Got endpoints: latency-svc-g2rpj [1.063485778s] Nov 16 09:35:22.745: INFO: Created: latency-svc-zddsh Nov 16 09:35:22.809: INFO: Got endpoints: latency-svc-zddsh [1.134680582s] Nov 16 09:35:22.813: INFO: Created: latency-svc-pp29g Nov 16 09:35:22.818: INFO: Got endpoints: latency-svc-pp29g [1.050307021s] Nov 16 09:35:22.890: INFO: Created: latency-svc-cz6k7 Nov 16 09:35:22.902: INFO: Got endpoints: latency-svc-cz6k7 [1.099347062s] Nov 16 09:35:22.954: INFO: Created: latency-svc-rnqp4 Nov 16 09:35:22.958: INFO: Got endpoints: latency-svc-rnqp4 [1.100414539s] Nov 16 09:35:23.009: INFO: Created: latency-svc-4g9v7 Nov 16 09:35:23.043: INFO: Got endpoints: 
latency-svc-4g9v7 [1.089774121s] Nov 16 09:35:23.097: INFO: Created: latency-svc-vp7p2 Nov 16 09:35:23.111: INFO: Got endpoints: latency-svc-vp7p2 [1.067752191s] Nov 16 09:35:23.141: INFO: Created: latency-svc-jvn7p Nov 16 09:35:23.149: INFO: Got endpoints: latency-svc-jvn7p [1.053859039s] Nov 16 09:35:23.170: INFO: Created: latency-svc-sf2rz Nov 16 09:35:23.180: INFO: Got endpoints: latency-svc-sf2rz [964.16637ms] Nov 16 09:35:23.228: INFO: Created: latency-svc-rxvdr Nov 16 09:35:23.232: INFO: Got endpoints: latency-svc-rxvdr [974.32961ms] Nov 16 09:35:23.273: INFO: Created: latency-svc-gpjc9 Nov 16 09:35:23.289: INFO: Got endpoints: latency-svc-gpjc9 [922.020253ms] Nov 16 09:35:23.308: INFO: Created: latency-svc-8brcn Nov 16 09:35:23.324: INFO: Got endpoints: latency-svc-8brcn [909.839805ms] Nov 16 09:35:23.421: INFO: Created: latency-svc-5d8b9 Nov 16 09:35:23.465: INFO: Created: latency-svc-s7tql Nov 16 09:35:23.465: INFO: Got endpoints: latency-svc-5d8b9 [957.945247ms] Nov 16 09:35:23.495: INFO: Got endpoints: latency-svc-s7tql [900.459994ms] Nov 16 09:35:23.564: INFO: Created: latency-svc-hr4rc Nov 16 09:35:23.567: INFO: Got endpoints: latency-svc-hr4rc [916.193764ms] Nov 16 09:35:23.591: INFO: Created: latency-svc-47bdx Nov 16 09:35:23.607: INFO: Got endpoints: latency-svc-47bdx [909.977337ms] Nov 16 09:35:23.626: INFO: Created: latency-svc-nvpbn Nov 16 09:35:23.638: INFO: Got endpoints: latency-svc-nvpbn [829.059434ms] Nov 16 09:35:23.656: INFO: Created: latency-svc-lxv7n Nov 16 09:35:23.713: INFO: Got endpoints: latency-svc-lxv7n [895.382471ms] Nov 16 09:35:23.731: INFO: Created: latency-svc-m4hzh Nov 16 09:35:23.753: INFO: Got endpoints: latency-svc-m4hzh [850.708817ms] Nov 16 09:35:23.778: INFO: Created: latency-svc-fxrm6 Nov 16 09:35:23.788: INFO: Got endpoints: latency-svc-fxrm6 [830.591897ms] Nov 16 09:35:23.813: INFO: Created: latency-svc-8d8f8 Nov 16 09:35:23.881: INFO: Got endpoints: latency-svc-8d8f8 [837.849582ms] Nov 16 09:35:23.890: INFO: 
Created: latency-svc-4gcdc Nov 16 09:35:23.897: INFO: Got endpoints: latency-svc-4gcdc [108.646605ms] Nov 16 09:35:23.938: INFO: Created: latency-svc-8dsx6 Nov 16 09:35:23.969: INFO: Got endpoints: latency-svc-8dsx6 [858.79764ms] Nov 16 09:35:24.043: INFO: Created: latency-svc-8sl8b Nov 16 09:35:24.052: INFO: Got endpoints: latency-svc-8sl8b [902.706167ms] Nov 16 09:35:24.083: INFO: Created: latency-svc-d9j85 Nov 16 09:35:24.106: INFO: Got endpoints: latency-svc-d9j85 [926.068331ms] Nov 16 09:35:24.142: INFO: Created: latency-svc-r47bl Nov 16 09:35:24.204: INFO: Got endpoints: latency-svc-r47bl [971.791424ms] Nov 16 09:35:24.227: INFO: Created: latency-svc-vqt65 Nov 16 09:35:24.257: INFO: Got endpoints: latency-svc-vqt65 [968.180386ms] Nov 16 09:35:24.304: INFO: Created: latency-svc-6fqzl Nov 16 09:35:24.348: INFO: Got endpoints: latency-svc-6fqzl [1.023518936s] Nov 16 09:35:24.376: INFO: Created: latency-svc-cj7dk Nov 16 09:35:24.398: INFO: Got endpoints: latency-svc-cj7dk [932.039043ms] Nov 16 09:35:24.425: INFO: Created: latency-svc-2bqvr Nov 16 09:35:24.439: INFO: Got endpoints: latency-svc-2bqvr [943.909568ms] Nov 16 09:35:24.562: INFO: Created: latency-svc-glgw4 Nov 16 09:35:24.590: INFO: Got endpoints: latency-svc-glgw4 [1.022367042s] Nov 16 09:35:24.623: INFO: Created: latency-svc-qjt84 Nov 16 09:35:24.690: INFO: Got endpoints: latency-svc-qjt84 [1.082192617s] Nov 16 09:35:24.692: INFO: Created: latency-svc-4xfs6 Nov 16 09:35:24.697: INFO: Got endpoints: latency-svc-4xfs6 [1.059414564s] Nov 16 09:35:24.722: INFO: Created: latency-svc-jgd2z Nov 16 09:35:24.766: INFO: Got endpoints: latency-svc-jgd2z [1.05307564s] Nov 16 09:35:24.834: INFO: Created: latency-svc-9sjcb Nov 16 09:35:24.842: INFO: Got endpoints: latency-svc-9sjcb [1.089413209s] Nov 16 09:35:24.862: INFO: Created: latency-svc-84br6 Nov 16 09:35:24.878: INFO: Got endpoints: latency-svc-84br6 [997.066214ms] Nov 16 09:35:24.898: INFO: Created: latency-svc-vgmqz Nov 16 09:35:24.915: INFO: Got 
endpoints: latency-svc-vgmqz [1.017698732s] Nov 16 09:35:24.971: INFO: Created: latency-svc-f9lhv Nov 16 09:35:24.994: INFO: Got endpoints: latency-svc-f9lhv [1.024431791s] Nov 16 09:35:24.994: INFO: Created: latency-svc-cs5m8 Nov 16 09:35:25.006: INFO: Got endpoints: latency-svc-cs5m8 [953.433569ms] Nov 16 09:35:25.023: INFO: Created: latency-svc-kzsr9 Nov 16 09:35:25.048: INFO: Got endpoints: latency-svc-kzsr9 [942.319104ms] Nov 16 09:35:25.115: INFO: Created: latency-svc-m82qd Nov 16 09:35:25.119: INFO: Got endpoints: latency-svc-m82qd [914.914354ms] Nov 16 09:35:25.144: INFO: Created: latency-svc-xk5ls Nov 16 09:35:25.157: INFO: Got endpoints: latency-svc-xk5ls [899.771164ms] Nov 16 09:35:25.174: INFO: Created: latency-svc-bwlsn Nov 16 09:35:25.187: INFO: Got endpoints: latency-svc-bwlsn [838.640498ms] Nov 16 09:35:25.210: INFO: Created: latency-svc-8lldc Nov 16 09:35:25.258: INFO: Got endpoints: latency-svc-8lldc [860.483621ms] Nov 16 09:35:25.270: INFO: Created: latency-svc-s9znw Nov 16 09:35:25.283: INFO: Got endpoints: latency-svc-s9znw [843.42672ms] Nov 16 09:35:25.306: INFO: Created: latency-svc-m8rcr Nov 16 09:35:25.320: INFO: Got endpoints: latency-svc-m8rcr [729.898934ms] Nov 16 09:35:25.342: INFO: Created: latency-svc-w2ftl Nov 16 09:35:25.414: INFO: Got endpoints: latency-svc-w2ftl [724.221635ms] Nov 16 09:35:25.424: INFO: Created: latency-svc-7cfpr Nov 16 09:35:25.450: INFO: Got endpoints: latency-svc-7cfpr [752.301783ms] Nov 16 09:35:25.482: INFO: Created: latency-svc-smj2t Nov 16 09:35:25.494: INFO: Got endpoints: latency-svc-smj2t [727.66369ms] Nov 16 09:35:25.546: INFO: Created: latency-svc-9b4dw Nov 16 09:35:25.550: INFO: Got endpoints: latency-svc-9b4dw [707.338614ms] Nov 16 09:35:25.575: INFO: Created: latency-svc-lz4f6 Nov 16 09:35:25.585: INFO: Got endpoints: latency-svc-lz4f6 [706.641245ms] Nov 16 09:35:25.607: INFO: Created: latency-svc-7zrlj Nov 16 09:35:25.615: INFO: Got endpoints: latency-svc-7zrlj [700.190698ms] Nov 16 09:35:25.636: 
INFO: Created: latency-svc-nmzh5 Nov 16 09:35:25.671: INFO: Got endpoints: latency-svc-nmzh5 [677.248907ms] Nov 16 09:35:25.684: INFO: Created: latency-svc-jxpc4 Nov 16 09:35:25.700: INFO: Got endpoints: latency-svc-jxpc4 [693.963994ms] Nov 16 09:35:25.719: INFO: Created: latency-svc-5ngsm Nov 16 09:35:25.736: INFO: Got endpoints: latency-svc-5ngsm [687.746002ms] Nov 16 09:35:25.755: INFO: Created: latency-svc-9pskp Nov 16 09:35:25.822: INFO: Got endpoints: latency-svc-9pskp [702.192281ms] Nov 16 09:35:25.839: INFO: Created: latency-svc-ckn9d Nov 16 09:35:25.854: INFO: Got endpoints: latency-svc-ckn9d [697.564985ms] Nov 16 09:35:25.869: INFO: Created: latency-svc-hb9v2 Nov 16 09:35:25.882: INFO: Got endpoints: latency-svc-hb9v2 [695.129905ms] Nov 16 09:35:25.900: INFO: Created: latency-svc-wrdrs Nov 16 09:35:25.911: INFO: Got endpoints: latency-svc-wrdrs [652.990021ms] Nov 16 09:35:25.983: INFO: Created: latency-svc-v94rk Nov 16 09:35:25.986: INFO: Got endpoints: latency-svc-v94rk [703.291942ms] Nov 16 09:35:26.068: INFO: Created: latency-svc-7sjn7 Nov 16 09:35:26.081: INFO: Got endpoints: latency-svc-7sjn7 [761.839777ms] Nov 16 09:35:26.146: INFO: Created: latency-svc-l7v9s Nov 16 09:35:26.175: INFO: Got endpoints: latency-svc-l7v9s [760.981019ms] Nov 16 09:35:26.224: INFO: Created: latency-svc-wvmgv Nov 16 09:35:26.313: INFO: Got endpoints: latency-svc-wvmgv [862.723362ms] Nov 16 09:35:26.343: INFO: Created: latency-svc-45jv2 Nov 16 09:35:26.357: INFO: Got endpoints: latency-svc-45jv2 [863.27253ms] Nov 16 09:35:26.397: INFO: Created: latency-svc-jxztb Nov 16 09:35:26.450: INFO: Got endpoints: latency-svc-jxztb [900.650803ms] Nov 16 09:35:26.463: INFO: Created: latency-svc-2ckxh Nov 16 09:35:26.489: INFO: Got endpoints: latency-svc-2ckxh [904.169671ms] Nov 16 09:35:26.523: INFO: Created: latency-svc-4t84s Nov 16 09:35:26.624: INFO: Got endpoints: latency-svc-4t84s [1.008746043s] Nov 16 09:35:26.631: INFO: Created: latency-svc-kbvnm Nov 16 09:35:26.673: INFO: Got 
endpoints: latency-svc-kbvnm [1.002073109s] Nov 16 09:35:26.704: INFO: Created: latency-svc-tg6kb Nov 16 09:35:26.761: INFO: Got endpoints: latency-svc-tg6kb [1.061603795s] Nov 16 09:35:26.800: INFO: Created: latency-svc-727gw Nov 16 09:35:26.814: INFO: Got endpoints: latency-svc-727gw [1.078020932s] Nov 16 09:35:26.836: INFO: Created: latency-svc-9phsg Nov 16 09:35:26.850: INFO: Got endpoints: latency-svc-9phsg [1.028639084s] Nov 16 09:35:26.893: INFO: Created: latency-svc-kd7cs Nov 16 09:35:26.919: INFO: Got endpoints: latency-svc-kd7cs [1.065117722s] Nov 16 09:35:26.921: INFO: Created: latency-svc-nk2pg Nov 16 09:35:26.935: INFO: Got endpoints: latency-svc-nk2pg [1.052874547s] Nov 16 09:35:26.955: INFO: Created: latency-svc-qd2qn Nov 16 09:35:26.971: INFO: Got endpoints: latency-svc-qd2qn [1.059814451s] Nov 16 09:35:26.991: INFO: Created: latency-svc-sfmqc Nov 16 09:35:27.031: INFO: Got endpoints: latency-svc-sfmqc [1.044407595s] Nov 16 09:35:27.045: INFO: Created: latency-svc-fvzwk Nov 16 09:35:27.062: INFO: Got endpoints: latency-svc-fvzwk [980.365167ms] Nov 16 09:35:27.081: INFO: Created: latency-svc-2wpfq Nov 16 09:35:27.092: INFO: Got endpoints: latency-svc-2wpfq [917.252286ms] Nov 16 09:35:27.111: INFO: Created: latency-svc-g9svn Nov 16 09:35:27.181: INFO: Got endpoints: latency-svc-g9svn [868.70184ms] Nov 16 09:35:27.196: INFO: Created: latency-svc-vv2mr Nov 16 09:35:27.207: INFO: Got endpoints: latency-svc-vv2mr [849.401457ms] Nov 16 09:35:27.226: INFO: Created: latency-svc-svmld Nov 16 09:35:27.237: INFO: Got endpoints: latency-svc-svmld [786.838765ms] Nov 16 09:35:27.255: INFO: Created: latency-svc-9nqx8 Nov 16 09:35:27.269: INFO: Got endpoints: latency-svc-9nqx8 [780.033157ms] Nov 16 09:35:27.312: INFO: Created: latency-svc-qcht8 Nov 16 09:35:27.317: INFO: Got endpoints: latency-svc-qcht8 [693.408285ms] Nov 16 09:35:27.345: INFO: Created: latency-svc-s7lfr Nov 16 09:35:27.375: INFO: Got endpoints: latency-svc-s7lfr [701.803764ms] Nov 16 09:35:27.405: 
INFO: Created: latency-svc-9mwwg Nov 16 09:35:27.444: INFO: Got endpoints: latency-svc-9mwwg [682.158125ms] Nov 16 09:35:27.459: INFO: Created: latency-svc-h6xj4 Nov 16 09:35:27.475: INFO: Got endpoints: latency-svc-h6xj4 [660.28133ms] Nov 16 09:35:27.495: INFO: Created: latency-svc-dlwnt Nov 16 09:35:27.504: INFO: Got endpoints: latency-svc-dlwnt [653.969156ms] Nov 16 09:35:27.525: INFO: Created: latency-svc-hcbk6 Nov 16 09:35:27.535: INFO: Got endpoints: latency-svc-hcbk6 [615.714125ms] Nov 16 09:35:27.582: INFO: Created: latency-svc-nkp7r Nov 16 09:35:27.603: INFO: Got endpoints: latency-svc-nkp7r [667.946292ms] Nov 16 09:35:27.645: INFO: Created: latency-svc-79ndn Nov 16 09:35:27.661: INFO: Got endpoints: latency-svc-79ndn [690.173345ms] Nov 16 09:35:27.681: INFO: Created: latency-svc-8q8p6 Nov 16 09:35:27.731: INFO: Got endpoints: latency-svc-8q8p6 [700.575904ms] Nov 16 09:35:27.747: INFO: Created: latency-svc-5bb5t Nov 16 09:35:27.764: INFO: Got endpoints: latency-svc-5bb5t [701.78104ms] Nov 16 09:35:27.789: INFO: Created: latency-svc-mlnkf Nov 16 09:35:27.800: INFO: Got endpoints: latency-svc-mlnkf [707.193923ms] Nov 16 09:35:27.819: INFO: Created: latency-svc-h8mfp Nov 16 09:35:27.863: INFO: Got endpoints: latency-svc-h8mfp [681.345669ms] Nov 16 09:35:27.882: INFO: Created: latency-svc-2kbrl Nov 16 09:35:27.891: INFO: Got endpoints: latency-svc-2kbrl [683.58917ms] Nov 16 09:35:28.411: INFO: Created: latency-svc-w6446 Nov 16 09:35:28.442: INFO: Got endpoints: latency-svc-w6446 [1.204624409s] Nov 16 09:35:28.443: INFO: Created: latency-svc-x68hz Nov 16 09:35:28.465: INFO: Got endpoints: latency-svc-x68hz [1.196264416s] Nov 16 09:35:28.570: INFO: Created: latency-svc-dmzdw Nov 16 09:35:28.587: INFO: Got endpoints: latency-svc-dmzdw [1.269899203s] Nov 16 09:35:28.622: INFO: Created: latency-svc-xhvjr Nov 16 09:35:28.635: INFO: Got endpoints: latency-svc-xhvjr [1.259961222s] Nov 16 09:35:28.654: INFO: Created: latency-svc-fzxwr Nov 16 09:35:28.714: INFO: Got 
endpoints: latency-svc-fzxwr [1.269987255s] Nov 16 09:35:28.731: INFO: Created: latency-svc-5ll2f Nov 16 09:35:28.750: INFO: Got endpoints: latency-svc-5ll2f [1.274760127s] Nov 16 09:35:28.779: INFO: Created: latency-svc-zd5tf Nov 16 09:35:28.798: INFO: Got endpoints: latency-svc-zd5tf [1.293038717s] Nov 16 09:35:28.887: INFO: Created: latency-svc-2vqcr Nov 16 09:35:28.918: INFO: Got endpoints: latency-svc-2vqcr [1.382648387s] Nov 16 09:35:28.959: INFO: Created: latency-svc-t27wb Nov 16 09:35:29.013: INFO: Got endpoints: latency-svc-t27wb [1.410373254s] Nov 16 09:35:29.018: INFO: Created: latency-svc-j2mfd Nov 16 09:35:29.042: INFO: Got endpoints: latency-svc-j2mfd [1.380525754s] Nov 16 09:35:29.073: INFO: Created: latency-svc-bn2kl Nov 16 09:35:29.080: INFO: Got endpoints: latency-svc-bn2kl [1.348661742s] Nov 16 09:35:29.102: INFO: Created: latency-svc-zs4qp Nov 16 09:35:29.163: INFO: Got endpoints: latency-svc-zs4qp [1.399056866s] Nov 16 09:35:29.186: INFO: Created: latency-svc-bzd2x Nov 16 09:35:29.201: INFO: Got endpoints: latency-svc-bzd2x [1.400971052s] Nov 16 09:35:29.221: INFO: Created: latency-svc-6fwgg Nov 16 09:35:29.252: INFO: Got endpoints: latency-svc-6fwgg [1.388782149s] Nov 16 09:35:29.306: INFO: Created: latency-svc-fhbgr Nov 16 09:35:29.315: INFO: Got endpoints: latency-svc-fhbgr [1.424244306s] Nov 16 09:35:29.336: INFO: Created: latency-svc-5b6ww Nov 16 09:35:29.345: INFO: Got endpoints: latency-svc-5b6ww [903.29827ms] Nov 16 09:35:29.366: INFO: Created: latency-svc-56m7t Nov 16 09:35:29.376: INFO: Got endpoints: latency-svc-56m7t [910.086436ms] Nov 16 09:35:29.396: INFO: Created: latency-svc-99g4v Nov 16 09:35:29.437: INFO: Got endpoints: latency-svc-99g4v [850.146843ms] Nov 16 09:35:29.456: INFO: Created: latency-svc-66lp6 Nov 16 09:35:29.486: INFO: Got endpoints: latency-svc-66lp6 [850.314599ms] Nov 16 09:35:29.522: INFO: Created: latency-svc-ftklf Nov 16 09:35:29.576: INFO: Got endpoints: latency-svc-ftklf [862.103421ms] Nov 16 09:35:29.581: 
INFO: Created: latency-svc-qst8h Nov 16 09:35:29.587: INFO: Got endpoints: latency-svc-qst8h [837.290655ms] Nov 16 09:35:29.648: INFO: Created: latency-svc-wqbv6 Nov 16 09:35:29.671: INFO: Got endpoints: latency-svc-wqbv6 [873.894123ms] Nov 16 09:35:29.732: INFO: Created: latency-svc-bhfzs Nov 16 09:35:29.762: INFO: Got endpoints: latency-svc-bhfzs [843.608546ms] Nov 16 09:35:29.763: INFO: Created: latency-svc-rsp55 Nov 16 09:35:29.786: INFO: Got endpoints: latency-svc-rsp55 [772.368128ms] Nov 16 09:35:29.822: INFO: Created: latency-svc-qs2cf Nov 16 09:35:29.894: INFO: Got endpoints: latency-svc-qs2cf [851.784078ms] Nov 16 09:35:29.901: INFO: Created: latency-svc-l7nr7 Nov 16 09:35:29.905: INFO: Got endpoints: latency-svc-l7nr7 [825.203176ms] Nov 16 09:35:29.923: INFO: Created: latency-svc-5jcvj Nov 16 09:35:29.936: INFO: Got endpoints: latency-svc-5jcvj [772.643212ms] Nov 16 09:35:29.953: INFO: Created: latency-svc-xtthl Nov 16 09:35:29.977: INFO: Got endpoints: latency-svc-xtthl [776.135497ms] Nov 16 09:35:30.051: INFO: Created: latency-svc-j2jbt Nov 16 09:35:30.062: INFO: Got endpoints: latency-svc-j2jbt [810.208268ms] Nov 16 09:35:30.085: INFO: Created: latency-svc-gbkmz Nov 16 09:35:30.122: INFO: Got endpoints: latency-svc-gbkmz [807.06343ms] Nov 16 09:35:30.188: INFO: Created: latency-svc-csng7 Nov 16 09:35:30.195: INFO: Got endpoints: latency-svc-csng7 [849.499173ms] Nov 16 09:35:30.217: INFO: Created: latency-svc-2674j Nov 16 09:35:30.242: INFO: Got endpoints: latency-svc-2674j [865.960555ms] Nov 16 09:35:30.324: INFO: Created: latency-svc-wklr6 Nov 16 09:35:30.333: INFO: Got endpoints: latency-svc-wklr6 [895.632994ms] Nov 16 09:35:30.361: INFO: Created: latency-svc-qjc75 Nov 16 09:35:30.398: INFO: Got endpoints: latency-svc-qjc75 [911.967775ms] Nov 16 09:35:30.499: INFO: Created: latency-svc-6pkvh Nov 16 09:35:30.513: INFO: Got endpoints: latency-svc-6pkvh [937.432824ms] Nov 16 09:35:30.571: INFO: Created: latency-svc-qqn5s Nov 16 09:35:30.690: INFO: Got 
endpoints: latency-svc-qqn5s [1.102953772s] Nov 16 09:35:30.691: INFO: Created: latency-svc-lgftk Nov 16 09:35:30.695: INFO: Got endpoints: latency-svc-lgftk [1.023195477s] Nov 16 09:35:30.721: INFO: Created: latency-svc-kdq9p Nov 16 09:35:30.736: INFO: Got endpoints: latency-svc-kdq9p [974.492574ms] Nov 16 09:35:30.775: INFO: Created: latency-svc-626vj Nov 16 09:35:30.858: INFO: Got endpoints: latency-svc-626vj [1.071761211s] Nov 16 09:35:30.926: INFO: Created: latency-svc-fnhjs Nov 16 09:35:30.944: INFO: Got endpoints: latency-svc-fnhjs [1.050294258s] Nov 16 09:35:31.013: INFO: Created: latency-svc-nqpd4 Nov 16 09:35:31.025: INFO: Got endpoints: latency-svc-nqpd4 [1.120164169s] Nov 16 09:35:31.044: INFO: Created: latency-svc-xwkhw Nov 16 09:35:31.061: INFO: Got endpoints: latency-svc-xwkhw [1.125822564s] Nov 16 09:35:31.081: INFO: Created: latency-svc-bq98x Nov 16 09:35:31.111: INFO: Got endpoints: latency-svc-bq98x [1.133755478s] Nov 16 09:35:31.169: INFO: Created: latency-svc-6z5j8 Nov 16 09:35:31.173: INFO: Got endpoints: latency-svc-6z5j8 [1.111402755s] Nov 16 09:35:31.219: INFO: Created: latency-svc-xpptr Nov 16 09:35:31.243: INFO: Got endpoints: latency-svc-xpptr [1.120823101s] Nov 16 09:35:31.319: INFO: Created: latency-svc-f7dw2 Nov 16 09:35:31.323: INFO: Got endpoints: latency-svc-f7dw2 [1.128086701s] Nov 16 09:35:31.351: INFO: Created: latency-svc-9vfnl Nov 16 09:35:31.363: INFO: Got endpoints: latency-svc-9vfnl [1.121123206s] Nov 16 09:35:31.399: INFO: Created: latency-svc-5kf5f Nov 16 09:35:31.411: INFO: Got endpoints: latency-svc-5kf5f [1.078381821s] Nov 16 09:35:31.444: INFO: Created: latency-svc-dqr59 Nov 16 09:35:31.453: INFO: Got endpoints: latency-svc-dqr59 [1.055067565s] Nov 16 09:35:31.483: INFO: Created: latency-svc-b4s8q Nov 16 09:35:31.495: INFO: Got endpoints: latency-svc-b4s8q [981.736664ms] Nov 16 09:35:31.513: INFO: Created: latency-svc-d2gxc Nov 16 09:35:31.537: INFO: Got endpoints: latency-svc-d2gxc [846.696876ms] Nov 16 09:35:31.588: 
INFO: Created: latency-svc-4j8zq Nov 16 09:35:31.598: INFO: Got endpoints: latency-svc-4j8zq [903.092897ms] Nov 16 09:35:31.621: INFO: Created: latency-svc-5rvnw Nov 16 09:35:31.645: INFO: Got endpoints: latency-svc-5rvnw [908.859964ms] Nov 16 09:35:31.676: INFO: Created: latency-svc-4qj6p Nov 16 09:35:31.713: INFO: Got endpoints: latency-svc-4qj6p [855.723001ms] Nov 16 09:35:31.723: INFO: Created: latency-svc-95hzr Nov 16 09:35:31.737: INFO: Got endpoints: latency-svc-95hzr [793.078001ms] Nov 16 09:35:31.759: INFO: Created: latency-svc-pjgtc Nov 16 09:35:31.774: INFO: Got endpoints: latency-svc-pjgtc [748.406308ms] Nov 16 09:35:31.797: INFO: Created: latency-svc-d6z2m Nov 16 09:35:31.811: INFO: Got endpoints: latency-svc-d6z2m [749.255493ms] Nov 16 09:35:31.858: INFO: Created: latency-svc-blvh2 Nov 16 09:35:31.865: INFO: Got endpoints: latency-svc-blvh2 [753.753372ms] Nov 16 09:35:31.884: INFO: Created: latency-svc-w9bkm Nov 16 09:35:31.901: INFO: Got endpoints: latency-svc-w9bkm [727.155425ms] Nov 16 09:35:31.927: INFO: Created: latency-svc-n5882 Nov 16 09:35:31.937: INFO: Got endpoints: latency-svc-n5882 [693.712618ms] Nov 16 09:35:31.957: INFO: Created: latency-svc-smtsd Nov 16 09:35:32.019: INFO: Got endpoints: latency-svc-smtsd [695.818049ms] Nov 16 09:35:32.028: INFO: Created: latency-svc-2qphh Nov 16 09:35:32.045: INFO: Got endpoints: latency-svc-2qphh [682.078152ms] Nov 16 09:35:32.082: INFO: Created: latency-svc-47jgq Nov 16 09:35:32.118: INFO: Got endpoints: latency-svc-47jgq [706.789838ms] Nov 16 09:35:32.195: INFO: Created: latency-svc-pbnk2 Nov 16 09:35:32.202: INFO: Got endpoints: latency-svc-pbnk2 [748.961321ms] Nov 16 09:35:32.220: INFO: Created: latency-svc-nrqf9 Nov 16 09:35:32.245: INFO: Got endpoints: latency-svc-nrqf9 [749.270662ms] Nov 16 09:35:32.275: INFO: Created: latency-svc-twpw8 Nov 16 09:35:32.286: INFO: Got endpoints: latency-svc-twpw8 [749.372044ms] Nov 16 09:35:32.348: INFO: Created: latency-svc-8ff74 Nov 16 09:35:32.358: INFO: Got 
endpoints: latency-svc-8ff74 [760.203649ms] Nov 16 09:35:32.399: INFO: Created: latency-svc-bk4nm Nov 16 09:35:32.406: INFO: Got endpoints: latency-svc-bk4nm [761.007807ms] Nov 16 09:35:32.437: INFO: Created: latency-svc-9gnw6 Nov 16 09:35:32.486: INFO: Got endpoints: latency-svc-9gnw6 [772.786721ms] Nov 16 09:35:32.502: INFO: Created: latency-svc-n872g Nov 16 09:35:32.522: INFO: Got endpoints: latency-svc-n872g [784.596432ms] Nov 16 09:35:32.563: INFO: Created: latency-svc-dv64g Nov 16 09:35:32.618: INFO: Got endpoints: latency-svc-dv64g [844.167206ms] Nov 16 09:35:32.634: INFO: Created: latency-svc-lf8dz Nov 16 09:35:32.664: INFO: Got endpoints: latency-svc-lf8dz [853.218611ms] Nov 16 09:35:32.694: INFO: Created: latency-svc-rm7vq Nov 16 09:35:32.714: INFO: Got endpoints: latency-svc-rm7vq [849.771258ms] Nov 16 09:35:33.080: INFO: Created: latency-svc-6f2m5 Nov 16 09:35:33.119: INFO: Got endpoints: latency-svc-6f2m5 [1.218517534s] Nov 16 09:35:33.588: INFO: Created: latency-svc-rrzl8 Nov 16 09:35:33.599: INFO: Got endpoints: latency-svc-rrzl8 [1.662394354s] Nov 16 09:35:33.636: INFO: Created: latency-svc-xqq64 Nov 16 09:35:33.649: INFO: Got endpoints: latency-svc-xqq64 [1.630258041s] Nov 16 09:35:33.671: INFO: Created: latency-svc-kwll4 Nov 16 09:35:33.756: INFO: Got endpoints: latency-svc-kwll4 [1.710543681s] Nov 16 09:35:33.816: INFO: Created: latency-svc-9vhr6 Nov 16 09:35:33.845: INFO: Got endpoints: latency-svc-9vhr6 [1.726778044s] Nov 16 09:35:33.918: INFO: Created: latency-svc-pzlbh Nov 16 09:35:33.926: INFO: Got endpoints: latency-svc-pzlbh [1.723570451s] Nov 16 09:35:33.947: INFO: Created: latency-svc-fwtfq Nov 16 09:35:33.962: INFO: Got endpoints: latency-svc-fwtfq [1.717724942s] Nov 16 09:35:34.002: INFO: Created: latency-svc-snjrz Nov 16 09:35:34.055: INFO: Got endpoints: latency-svc-snjrz [1.768542578s] Nov 16 09:35:34.084: INFO: Created: latency-svc-82tx9 Nov 16 09:35:34.101: INFO: Got endpoints: latency-svc-82tx9 [1.742596609s] Nov 16 09:35:34.127: 
INFO: Created: latency-svc-2prjb Nov 16 09:35:34.139: INFO: Got endpoints: latency-svc-2prjb [1.732342756s] Nov 16 09:35:34.193: INFO: Created: latency-svc-2jl8l Nov 16 09:35:34.197: INFO: Got endpoints: latency-svc-2jl8l [1.710887492s] Nov 16 09:35:34.253: INFO: Created: latency-svc-svf68 Nov 16 09:35:34.283: INFO: Got endpoints: latency-svc-svf68 [1.760736501s] Nov 16 09:35:34.283: INFO: Latencies: [108.646605ms 174.699537ms 210.251326ms 329.752237ms 551.441333ms 615.714125ms 637.017684ms 652.990021ms 653.969156ms 660.28133ms 667.946292ms 677.248907ms 681.345669ms 682.078152ms 682.158125ms 683.58917ms 687.746002ms 690.173345ms 693.408285ms 693.712618ms 693.963994ms 695.129905ms 695.818049ms 697.564985ms 700.190698ms 700.575904ms 701.343265ms 701.78104ms 701.803764ms 702.192281ms 703.291942ms 706.641245ms 706.789838ms 707.193923ms 707.338614ms 724.221635ms 727.155425ms 727.66369ms 729.898934ms 748.406308ms 748.961321ms 749.255493ms 749.270662ms 749.372044ms 752.301783ms 753.753372ms 760.203649ms 760.981019ms 761.007807ms 761.839777ms 772.368128ms 772.643212ms 772.786721ms 776.135497ms 780.033157ms 784.596432ms 786.838765ms 793.078001ms 797.035732ms 807.06343ms 810.208268ms 825.203176ms 829.059434ms 830.591897ms 837.239866ms 837.290655ms 837.849582ms 838.640498ms 843.42672ms 843.608546ms 844.167206ms 846.696876ms 849.401457ms 849.499173ms 849.771258ms 850.146843ms 850.314599ms 850.708817ms 851.784078ms 853.218611ms 855.723001ms 858.79764ms 860.483621ms 862.103421ms 862.723362ms 863.27253ms 865.960555ms 868.70184ms 873.894123ms 895.382471ms 895.632994ms 899.771164ms 900.459994ms 900.650803ms 902.706167ms 903.092897ms 903.29827ms 904.169671ms 908.859964ms 909.839805ms 909.977337ms 910.086436ms 911.967775ms 914.914354ms 916.193764ms 917.252286ms 922.020253ms 926.068331ms 930.616738ms 932.039043ms 937.432824ms 942.319104ms 943.909568ms 953.433569ms 957.945247ms 964.16637ms 965.750422ms 968.180386ms 971.791424ms 974.32961ms 974.492574ms 980.365167ms 981.736664ms 
997.066214ms 1.002073109s 1.008746043s 1.017698732s 1.020268482s 1.022367042s 1.023195477s 1.023518936s 1.024431791s 1.028639084s 1.044407595s 1.050294258s 1.050307021s 1.052874547s 1.05307564s 1.053859039s 1.055067565s 1.059414564s 1.059814451s 1.061603795s 1.063485778s 1.065117722s 1.067752191s 1.071761211s 1.078020932s 1.078381821s 1.082192617s 1.089413209s 1.089774121s 1.099347062s 1.100414539s 1.102953772s 1.111402755s 1.112757717s 1.116468179s 1.119325631s 1.120164169s 1.120823101s 1.120928709s 1.121123206s 1.125822564s 1.128086701s 1.133755478s 1.134680582s 1.196264416s 1.204624409s 1.205849011s 1.218517534s 1.246665608s 1.248074345s 1.258825945s 1.259961222s 1.269899203s 1.269987255s 1.274760127s 1.293038717s 1.319610128s 1.348661742s 1.378763129s 1.380525754s 1.382648387s 1.388782149s 1.399056866s 1.400971052s 1.410373254s 1.424244306s 1.630258041s 1.662394354s 1.710543681s 1.710887492s 1.717724942s 1.723570451s 1.726778044s 1.732342756s 1.742596609s 1.760736501s 1.768542578s] Nov 16 09:35:34.283: INFO: 50 %ile: 909.977337ms Nov 16 09:35:34.283: INFO: 90 %ile: 1.348661742s Nov 16 09:35:34.283: INFO: 99 %ile: 1.760736501s Nov 16 09:35:34.283: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:34.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6177" for this suite. 
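The latency test above sorts its 200 endpoint-propagation samples and reports the 50th, 90th, and 99th percentile entries. A sketch of that summary step is below; the index formula `len(samples) * p // 100` is an assumption modeled on a nearest-rank percentile, not the exact upstream e2e framework code, and the sample values here are made up for illustration.

```python
def percentile(sorted_samples, p):
    """Nearest-rank style percentile over an ascending-sorted sample list.

    Assumed index formula (not the verbatim upstream implementation):
    pick element at len(samples) * p // 100, clamped to the last entry.
    """
    if not sorted_samples:
        raise ValueError("no samples")
    idx = min(len(sorted_samples) * p // 100, len(sorted_samples) - 1)
    return sorted_samples[idx]

# Ten illustrative latencies in seconds, sorted ascending as the log does:
samples = sorted([0.108, 0.91, 0.92, 1.35, 1.76, 0.55, 1.02, 0.86, 0.70, 1.42])
```

Against the log's 200 real samples the same procedure yields the reported 50/90/99 %ile lines.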
• [SLOW TEST:19.789 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":102,"skipped":1884,"failed":0} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:35:34.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:34.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9077" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":103,"skipped":1886,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:35:34.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-ecd0c7d9-f0fa-45bc-8767-bd980924f731 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:34.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5038" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":104,"skipped":1904,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:35:34.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:35:34.789: INFO: Creating deployment "test-recreate-deployment" Nov 16 09:35:34.802: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Nov 16 09:35:34.828: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Nov 16 09:35:36.834: INFO: Waiting deployment "test-recreate-deployment" to complete Nov 16 09:35:36.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116134, 
loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116134, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116134, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116134, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 09:35:38.841: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Nov 16 09:35:38.848: INFO: Updating deployment test-recreate-deployment Nov 16 09:35:38.848: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 16 09:35:39.860: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7285 /apis/apps/v1/namespaces/deployment-7285/deployments/test-recreate-deployment 1fe8348b-6777-43ab-9b3b-84ae2c98a611 9779071 2 2020-11-16 09:35:34 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-11-16 09:35:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 09:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033280d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-11-16 09:35:39 +0000 UTC,LastTransitionTime:2020-11-16 09:35:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-11-16 09:35:39 +0000 UTC,LastTransitionTime:2020-11-16 09:35:34 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Nov 16 09:35:39.874: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-7285 /apis/apps/v1/namespaces/deployment-7285/replicasets/test-recreate-deployment-f79dd4667 38939e0e-c7f5-4782-a1ab-398ee6f08ebc 9779070 1 2020-11-16 09:35:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1fe8348b-6777-43ab-9b3b-84ae2c98a611 0xc003328aa0 0xc003328aa1}] [] [{kube-controller-manager Update apps/v1 2020-11-16 09:35:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fe8348b-6777-43ab-9b3b-84ae2c98a611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003328b28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 09:35:39.874: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Nov 16 09:35:39.874: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-7285 /apis/apps/v1/namespaces/deployment-7285/replicasets/test-recreate-deployment-c96cf48f deba1ec7-976a-49a0-906f-b4664e9b6f2c 9779061 2 2020-11-16 09:35:34 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1fe8348b-6777-43ab-9b3b-84ae2c98a611 0xc0033288df 0xc0033288f0}] [] [{kube-controller-manager Update apps/v1 2020-11-16 09:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1fe8348b-6777-43ab-9b3b-84ae2c98a611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSele
ctor{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003328a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 09:35:39.880: INFO: Pod "test-recreate-deployment-f79dd4667-2nf59" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-2nf59 test-recreate-deployment-f79dd4667- deployment-7285 /api/v1/namespaces/deployment-7285/pods/test-recreate-deployment-f79dd4667-2nf59 d1418d61-69a0-48c3-bbf0-5a93d56d01d5 9779074 0 2020-11-16 09:35:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 38939e0e-c7f5-4782-a1ab-398ee6f08ebc 0xc0033d80e0 0xc0033d80e1}] [] [{kube-controller-manager Update v1 2020-11-16 09:35:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"38939e0e-c7f5-4782-a1ab-398ee6f08ebc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 09:35:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cmxkl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cmxkl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cmxkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{
Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:35:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:35:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:35:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 09:35:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-11-16 09:35:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:35:39.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7285" for this suite. 
• [SLOW TEST:5.440 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":105,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:35:40.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9744 STEP: creating service affinity-clusterip-transition in namespace services-9744 STEP: creating replication controller 
affinity-clusterip-transition in namespace services-9744 I1116 09:35:40.367083 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9744, replica count: 3 I1116 09:35:43.417509 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:35:46.417652 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:35:49.417860 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:35:49.552: INFO: Creating new exec pod Nov 16 09:35:56.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-9744 execpod-affinityg6j5z -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Nov 16 09:35:57.145: INFO: stderr: "I1116 09:35:57.015395 1690 log.go:181] (0xc00057c8f0) (0xc000953040) Create stream\nI1116 09:35:57.015489 1690 log.go:181] (0xc00057c8f0) (0xc000953040) Stream added, broadcasting: 1\nI1116 09:35:57.018190 1690 log.go:181] (0xc00057c8f0) Reply frame received for 1\nI1116 09:35:57.018231 1690 log.go:181] (0xc00057c8f0) (0xc00055a0a0) Create stream\nI1116 09:35:57.018247 1690 log.go:181] (0xc00057c8f0) (0xc00055a0a0) Stream added, broadcasting: 3\nI1116 09:35:57.019014 1690 log.go:181] (0xc00057c8f0) Reply frame received for 3\nI1116 09:35:57.019044 1690 log.go:181] (0xc00057c8f0) (0xc0005c03c0) Create stream\nI1116 09:35:57.019053 1690 log.go:181] (0xc00057c8f0) (0xc0005c03c0) Stream added, broadcasting: 5\nI1116 09:35:57.019700 1690 log.go:181] (0xc00057c8f0) Reply frame received for 5\nI1116 09:35:57.135594 1690 log.go:181] (0xc00057c8f0) Data frame received for 5\nI1116 
09:35:57.135632 1690 log.go:181] (0xc0005c03c0) (5) Data frame handling\nI1116 09:35:57.135661 1690 log.go:181] (0xc0005c03c0) (5) Data frame sent\nI1116 09:35:57.135676 1690 log.go:181] (0xc00057c8f0) Data frame received for 5\nI1116 09:35:57.135688 1690 log.go:181] (0xc0005c03c0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI1116 09:35:57.135815 1690 log.go:181] (0xc0005c03c0) (5) Data frame sent\nI1116 09:35:57.135858 1690 log.go:181] (0xc00057c8f0) Data frame received for 5\nI1116 09:35:57.135879 1690 log.go:181] (0xc0005c03c0) (5) Data frame handling\nI1116 09:35:57.135925 1690 log.go:181] (0xc00057c8f0) Data frame received for 3\nI1116 09:35:57.135946 1690 log.go:181] (0xc00055a0a0) (3) Data frame handling\nI1116 09:35:57.137929 1690 log.go:181] (0xc00057c8f0) Data frame received for 1\nI1116 09:35:57.137946 1690 log.go:181] (0xc000953040) (1) Data frame handling\nI1116 09:35:57.137955 1690 log.go:181] (0xc000953040) (1) Data frame sent\nI1116 09:35:57.137970 1690 log.go:181] (0xc00057c8f0) (0xc000953040) Stream removed, broadcasting: 1\nI1116 09:35:57.137989 1690 log.go:181] (0xc00057c8f0) Go away received\nI1116 09:35:57.138428 1690 log.go:181] (0xc00057c8f0) (0xc000953040) Stream removed, broadcasting: 1\nI1116 09:35:57.138446 1690 log.go:181] (0xc00057c8f0) (0xc00055a0a0) Stream removed, broadcasting: 3\nI1116 09:35:57.138456 1690 log.go:181] (0xc00057c8f0) (0xc0005c03c0) Stream removed, broadcasting: 5\n" Nov 16 09:35:57.145: INFO: stdout: "" Nov 16 09:35:57.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-9744 execpod-affinityg6j5z -- /bin/sh -x -c nc -zv -t -w 2 10.105.102.22 80' Nov 16 09:35:57.431: INFO: stderr: "I1116 09:35:57.344042 1708 log.go:181] (0xc00003a420) (0xc0009b0140) Create stream\nI1116 09:35:57.344108 1708 log.go:181] (0xc00003a420) 
(0xc0009b0140) Stream added, broadcasting: 1\nI1116 09:35:57.346246 1708 log.go:181] (0xc00003a420) Reply frame received for 1\nI1116 09:35:57.346302 1708 log.go:181] (0xc00003a420) (0xc0004b4140) Create stream\nI1116 09:35:57.346325 1708 log.go:181] (0xc00003a420) (0xc0004b4140) Stream added, broadcasting: 3\nI1116 09:35:57.348810 1708 log.go:181] (0xc00003a420) Reply frame received for 3\nI1116 09:35:57.348946 1708 log.go:181] (0xc00003a420) (0xc0006bfea0) Create stream\nI1116 09:35:57.348967 1708 log.go:181] (0xc00003a420) (0xc0006bfea0) Stream added, broadcasting: 5\nI1116 09:35:57.349834 1708 log.go:181] (0xc00003a420) Reply frame received for 5\nI1116 09:35:57.422887 1708 log.go:181] (0xc00003a420) Data frame received for 5\nI1116 09:35:57.422921 1708 log.go:181] (0xc0006bfea0) (5) Data frame handling\nI1116 09:35:57.422932 1708 log.go:181] (0xc0006bfea0) (5) Data frame sent\nI1116 09:35:57.422940 1708 log.go:181] (0xc00003a420) Data frame received for 5\nI1116 09:35:57.422948 1708 log.go:181] (0xc0006bfea0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.102.22 80\nConnection to 10.105.102.22 80 port [tcp/http] succeeded!\nI1116 09:35:57.422981 1708 log.go:181] (0xc00003a420) Data frame received for 3\nI1116 09:35:57.422994 1708 log.go:181] (0xc0004b4140) (3) Data frame handling\nI1116 09:35:57.424338 1708 log.go:181] (0xc00003a420) Data frame received for 1\nI1116 09:35:57.424383 1708 log.go:181] (0xc0009b0140) (1) Data frame handling\nI1116 09:35:57.424398 1708 log.go:181] (0xc0009b0140) (1) Data frame sent\nI1116 09:35:57.424416 1708 log.go:181] (0xc00003a420) (0xc0009b0140) Stream removed, broadcasting: 1\nI1116 09:35:57.424449 1708 log.go:181] (0xc00003a420) Go away received\nI1116 09:35:57.425109 1708 log.go:181] (0xc00003a420) (0xc0009b0140) Stream removed, broadcasting: 1\nI1116 09:35:57.425135 1708 log.go:181] (0xc00003a420) (0xc0004b4140) Stream removed, broadcasting: 3\nI1116 09:35:57.425144 1708 log.go:181] (0xc00003a420) (0xc0006bfea0) Stream 
removed, broadcasting: 5\n" Nov 16 09:35:57.431: INFO: stdout: "" Nov 16 09:35:57.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-9744 execpod-affinityg6j5z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.102.22:80/ ; done' Nov 16 09:35:57.789: INFO: stderr: "I1116 09:35:57.600971 1726 log.go:181] (0xc000f46dc0) (0xc000f0c820) Create stream\n[streams 1, 3 and 5 are created and added; the /bin/sh -x trace then shows + seq 0 15 followed by + echo and + curl -q -s --connect-timeout 2 http://10.105.102.22:80/ for each of the 16 iterations, interleaved with repetitive log.go:181 Data frame received/handling/sent lines]\nI1116 09:35:57.780111 1726 log.go:181] (0xc000f46dc0) (0xc0005a7f40) Stream removed, broadcasting: 5\n" Nov 16 09:35:57.789: INFO: stdout: "\naffinity-clusterip-transition-cm2bl [the same hostname is returned for all 16 requests]" Nov 16 09:35:57.789: INFO:
Received response from host: affinity-clusterip-transition-cm2bl [the same response is logged 16 times, once per request] Nov 16 09:36:27.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-9744 execpod-affinityg6j5z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.102.22:80/ ; done' Nov 16 09:36:28.151: INFO: stderr: "I1116 09:36:27.937123 1744 log.go:181] (0xc00053a000) (0xc000433900) Create stream\n[streams 1, 3 and 5 are created and added; the /bin/sh -x trace then shows + seq 0 15 followed by + echo and + curl -q -s --connect-timeout 2 http://10.105.102.22:80/ for each of the 16 iterations, interleaved with repetitive log.go:181 Data frame received/handling/sent lines]\nI1116 09:36:28.146472 1744 log.go:181] (0xc00053a000) (0xc00017e000) Stream removed, broadcasting: 5\n" Nov 16 09:36:28.152: INFO: stdout: "\naffinity-clusterip-transition-cm2bl\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-cm2bl\naffinity-clusterip-transition-cm2bl\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-cm2bl\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-9ktw7\naffinity-clusterip-transition-cm2bl" Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-cm2bl Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-cm2bl Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-cm2bl Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.152: INFO: Received response from host:
affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-cm2bl Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-9ktw7 Nov 16 09:36:28.152: INFO: Received response from host: affinity-clusterip-transition-cm2bl Nov 16 09:36:28.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-9744 execpod-affinityg6j5z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.102.22:80/ ; done' Nov 16 09:36:28.477: INFO: stderr: "I1116 09:36:28.301472 1762 log.go:181] (0xc000b9f4a0) (0xc000c0cbe0) Create stream\n[streams 1, 3 and 5 are created and added; the /bin/sh -x trace then shows + seq 0 15 followed by + echo and + curl -q -s --connect-timeout 2 http://10.105.102.22:80/ per iteration, interleaved with repetitive log.go:181 Data frame received/handling/sent lines]\nI1116 09:36:28.410784 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.410804 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.410824 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.411681 1762 log.go:181] (0xc000b9f4a0) Data
frame received for 5\nI1116 09:36:28.411707 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.102.22:80/\nI1116 09:36:28.411727 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.411766 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.411790 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.411800 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\nI1116 09:36:28.416266 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.416288 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.416305 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.417177 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.417197 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.417209 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I1116 09:36:28.417317 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.417368 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.417384 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n http://10.105.102.22:80/\nI1116 09:36:28.417412 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.417436 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.417461 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.424054 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.424092 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.424117 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.424744 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.424764 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.424774 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.105.102.22:80/\nI1116 09:36:28.424793 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.424810 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.424827 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.431611 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.431627 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.431636 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.432488 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.432526 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.432554 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.102.22:80/\nI1116 09:36:28.432576 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.432593 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.432655 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.437807 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.437827 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.437839 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.438983 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.439004 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.439027 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.439062 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.439078 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.102.22:80/\nI1116 09:36:28.439096 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.443849 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.443889 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.443915 1762 
log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.444581 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.444595 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.444604 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.444648 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.444683 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.444724 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.102.22:80/\nI1116 09:36:28.452478 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.452526 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.452561 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.453444 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.453480 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.453523 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\nI1116 09:36:28.453546 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.453560 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.102.22:80/\nI1116 09:36:28.453593 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.453622 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.453649 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.453674 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\nI1116 09:36:28.460615 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.460625 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.460632 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.461564 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.461604 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 
09:36:28.461621 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.461643 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.461657 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.461673 1762 log.go:181] (0xc000c0cd20) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.102.22:80/\nI1116 09:36:28.465386 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.465421 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.465448 1762 log.go:181] (0xc000c0cc80) (3) Data frame sent\nI1116 09:36:28.466632 1762 log.go:181] (0xc000b9f4a0) Data frame received for 5\nI1116 09:36:28.466647 1762 log.go:181] (0xc000c0cd20) (5) Data frame handling\nI1116 09:36:28.466691 1762 log.go:181] (0xc000b9f4a0) Data frame received for 3\nI1116 09:36:28.466727 1762 log.go:181] (0xc000c0cc80) (3) Data frame handling\nI1116 09:36:28.468797 1762 log.go:181] (0xc000b9f4a0) Data frame received for 1\nI1116 09:36:28.468820 1762 log.go:181] (0xc000c0cbe0) (1) Data frame handling\nI1116 09:36:28.468831 1762 log.go:181] (0xc000c0cbe0) (1) Data frame sent\nI1116 09:36:28.468856 1762 log.go:181] (0xc000b9f4a0) (0xc000c0cbe0) Stream removed, broadcasting: 1\nI1116 09:36:28.469117 1762 log.go:181] (0xc000b9f4a0) Go away received\nI1116 09:36:28.469229 1762 log.go:181] (0xc000b9f4a0) (0xc000c0cbe0) Stream removed, broadcasting: 1\nI1116 09:36:28.469250 1762 log.go:181] (0xc000b9f4a0) (0xc000c0cc80) Stream removed, broadcasting: 3\nI1116 09:36:28.469260 1762 log.go:181] (0xc000b9f4a0) (0xc000c0cd20) Stream removed, broadcasting: 5\n" Nov 16 09:36:28.477: INFO: stdout: 
"\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9\naffinity-clusterip-transition-lkxl9" Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: 
INFO: Received response from host: affinity-clusterip-transition-lkxl9 Nov 16 09:36:28.477: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9744, will wait for the garbage collector to delete the pods Nov 16 09:36:28.603: INFO: Deleting ReplicationController affinity-clusterip-transition took: 43.118917ms Nov 16 09:36:29.104: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.19225ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:36:35.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9744" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:55.651 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":106,"skipped":1963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:36:35.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:36:36.560: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:36:38.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116196, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116196, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116196, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116196, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:36:41.906: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:36:42.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4102" for this suite. STEP: Destroying namespace "webhook-4102-markers" for this suite. 
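The mutating-webhook registration exercised by this test can be sketched as a MutatingWebhookConfiguration manifest. This is an illustrative assumption, not the exact object the e2e framework builds; the webhook name, path, and CA bundle below are placeholders, while the service name and namespace are taken from the log:

```yaml
# Hypothetical sketch of the kind of mutating webhook the test registers.
# Only `service.name` and `namespace` come from the log; everything else
# (webhook name, path, caBundle) is a placeholder assumption.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook       # assumed name
webhooks:
  - name: pod-defaulter.example.com     # assumed name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: e2e-test-webhook          # service name from the log
        namespace: webhook-4102         # namespace from the log
        path: /mutating-pods            # assumed path
      caBundle: "<base64-encoded-CA>"   # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```

A pod created after this registration would pass through the webhook, which patches defaults into its spec before admission — which is what the "create a pod that should be updated by the webhook" STEP verifies.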
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.393 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":107,"skipped":1995,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:36:42.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:36:46.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2133" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":2002,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:36:46.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9993/configmap-test-43f5784f-bd02-4278-937a-89c4f2de10e6 STEP: Creating a pod to test consume configMaps Nov 16 09:36:46.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d" in namespace "configmap-9993" to be "Succeeded or Failed" Nov 16 09:36:46.481: INFO: Pod "pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.007027ms Nov 16 09:36:48.485: INFO: Pod "pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015637052s Nov 16 09:36:50.489: INFO: Pod "pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019472735s STEP: Saw pod success Nov 16 09:36:50.489: INFO: Pod "pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d" satisfied condition "Succeeded or Failed" Nov 16 09:36:50.492: INFO: Trying to get logs from node latest-worker pod pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d container env-test: STEP: delete the pod Nov 16 09:36:50.524: INFO: Waiting for pod pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d to disappear Nov 16 09:36:50.558: INFO: Pod pod-configmaps-68e9e858-b37e-4d25-9e99-7cfe8c5d1c0d no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:36:50.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9993" for this suite. 
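The "consumable via the environment" test above creates a ConfigMap and a pod that reads one of its keys into an environment variable. A minimal sketch of such a pod follows; the ConfigMap name, namespace, and container name are taken from the log, while the image, command, key, and variable name are illustrative assumptions:

```yaml
# Hypothetical sketch of the pod the test creates (the real pod has a
# generated UID-based name and uses the framework's own test image).
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
  namespace: configmap-9993             # namespace from the log
spec:
  restartPolicy: Never
  containers:
    - name: env-test                    # container name from the log
      image: busybox                    # assumed image
      command: ["sh", "-c", "env"]      # print env so the test can read logs
      env:
        - name: CONFIG_DATA_1           # assumed variable name
          valueFrom:
            configMapKeyRef:
              name: configmap-test-43f5784f-bd02-4278-937a-89c4f2de10e6
              key: data-1               # assumed key name
```

The pod runs to completion ("Succeeded"), and the test then fetches the container logs to confirm the variable carried the ConfigMap value.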
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":2006,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:36:50.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-4ec74052-ad24-497c-a86b-d98d1372c0bb STEP: Creating a pod to test consume secrets Nov 16 09:36:50.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce" in namespace "projected-8993" to be "Succeeded or Failed" Nov 16 09:36:50.656: INFO: Pod "pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce": Phase="Pending", Reason="", readiness=false. Elapsed: 7.184102ms Nov 16 09:36:55.774: INFO: Pod "pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce": Phase="Pending", Reason="", readiness=false. Elapsed: 5.125412561s Nov 16 09:36:57.778: INFO: Pod "pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce": Phase="Running", Reason="", readiness=true. 
Elapsed: 7.129144795s Nov 16 09:36:59.782: INFO: Pod "pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.132976911s STEP: Saw pod success Nov 16 09:36:59.782: INFO: Pod "pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce" satisfied condition "Succeeded or Failed" Nov 16 09:36:59.785: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce container secret-volume-test: STEP: delete the pod Nov 16 09:36:59.822: INFO: Waiting for pod pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce to disappear Nov 16 09:36:59.834: INFO: Pod pod-projected-secrets-59ca6b30-27b9-49e2-80a5-12a58e34e5ce no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:36:59.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8993" for this suite. 
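The projected-secret test above mounts the same secret into a pod at more than one volume. A sketch under stated assumptions: the secret name, namespace, and container name come from the log; the mount paths and image are illustrative:

```yaml
# Hypothetical sketch of a pod projecting one secret into two volumes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
  namespace: projected-8993             # namespace from the log
spec:
  restartPolicy: Never
  containers:
    - name: secret-volume-test          # container name from the log
      image: busybox                    # assumed image
      command: ["sh", "-c", "ls /etc/projected-1 /etc/projected-2"]
      volumeMounts:
        - name: projected-secret-1
          mountPath: /etc/projected-1   # assumed path
        - name: projected-secret-2
          mountPath: /etc/projected-2   # assumed path
  volumes:
    - name: projected-secret-1
      projected:
        sources:
          - secret:
              name: projected-secret-test-4ec74052-ad24-497c-a86b-d98d1372c0bb
    - name: projected-secret-2
      projected:
        sources:
          - secret:
              name: projected-secret-test-4ec74052-ad24-497c-a86b-d98d1372c0bb
```

Both mounts expose the same secret keys, and the test verifies the files are readable at each path.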
• [SLOW TEST:9.279 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":110,"skipped":2017,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:36:59.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ddb16fc6-3968-4e81-a75c-7f45f493d160 STEP: Creating a pod to test consume configMaps Nov 16 09:36:59.990: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a" in namespace "configmap-4830" to be "Succeeded or Failed" Nov 16 09:37:00.002: INFO: 
Pod "pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.293899ms Nov 16 09:37:02.006: INFO: Pod "pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01538807s Nov 16 09:37:04.010: INFO: Pod "pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019747223s STEP: Saw pod success Nov 16 09:37:04.010: INFO: Pod "pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a" satisfied condition "Succeeded or Failed" Nov 16 09:37:04.013: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a container configmap-volume-test: STEP: delete the pod Nov 16 09:37:04.075: INFO: Waiting for pod pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a to disappear Nov 16 09:37:04.109: INFO: Pod pod-configmaps-6e6fab48-0e9f-4315-9b6f-4fbd4912496a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:04.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4830" for this suite. 
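The "consumable from pods in volume as non-root" test mounts a ConfigMap volume into a pod running under a non-root security context. A minimal sketch, assuming a UID and mount path (the ConfigMap name, namespace, and container name come from the log):

```yaml
# Hypothetical sketch of a non-root pod consuming a ConfigMap volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot-example
  namespace: configmap-4830             # namespace from the log
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                     # assumed non-root UID
  containers:
    - name: configmap-volume-test       # container name from the log
      image: busybox                    # assumed image
      command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume   # assumed path
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-ddb16fc6-3968-4e81-a75c-7f45f493d160
```

The test passes when the non-root process can read the projected file, confirming the volume's default file modes are accessible without root.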
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":2074,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:04.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 16 09:37:04.685: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 16 09:37:06.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116224, loc:(*time.Location)(0x77108c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116224, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116225, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116224, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 09:37:08.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116224, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116224, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116225, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116224, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:37:11.800: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:37:11.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:13.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7321" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.034 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":112,"skipped":2084,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir 
wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:13.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:17.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-627" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":113,"skipped":2097,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:17.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8844 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 16 09:37:17.463: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 16 09:37:17.747: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 09:37:19.751: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 09:37:21.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 09:37:23.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 09:37:25.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 09:37:27.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 09:37:29.751: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Nov 16 09:37:31.751: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 09:37:33.751: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 16 09:37:33.758: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 09:37:35.761: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 16 09:37:39.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.71:8080/dial?request=hostname&protocol=http&host=10.244.2.70&port=8080&tries=1'] Namespace:pod-network-test-8844 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:37:39.788: INFO: >>> kubeConfig: /root/.kube/config I1116 09:37:39.826451 7 log.go:181] (0xc0068d0fd0) (0xc001c83f40) Create stream I1116 09:37:39.826479 7 log.go:181] (0xc0068d0fd0) (0xc001c83f40) Stream added, broadcasting: 1 I1116 09:37:39.830679 7 log.go:181] (0xc0068d0fd0) Reply frame received for 1 I1116 09:37:39.830733 7 log.go:181] (0xc0068d0fd0) (0xc0037a63c0) Create stream I1116 09:37:39.830745 7 log.go:181] (0xc0068d0fd0) (0xc0037a63c0) Stream added, broadcasting: 3 I1116 09:37:39.831517 7 log.go:181] (0xc0068d0fd0) Reply frame received for 3 I1116 09:37:39.831551 7 log.go:181] (0xc0068d0fd0) (0xc0037a6460) Create stream I1116 09:37:39.831560 7 log.go:181] (0xc0068d0fd0) (0xc0037a6460) Stream added, broadcasting: 5 I1116 09:37:39.832533 7 log.go:181] (0xc0068d0fd0) Reply frame received for 5 I1116 09:37:39.893285 7 log.go:181] (0xc0068d0fd0) Data frame received for 3 I1116 09:37:39.893313 7 log.go:181] (0xc0037a63c0) (3) Data frame handling I1116 09:37:39.893335 7 log.go:181] (0xc0037a63c0) (3) Data frame sent I1116 09:37:39.893977 7 log.go:181] (0xc0068d0fd0) Data frame received for 5 I1116 09:37:39.894010 7 log.go:181] (0xc0037a6460) (5) Data frame handling I1116 09:37:39.894237 7 log.go:181] (0xc0068d0fd0) Data frame received for 3 I1116 
09:37:39.894272 7 log.go:181] (0xc0037a63c0) (3) Data frame handling I1116 09:37:39.896011 7 log.go:181] (0xc0068d0fd0) Data frame received for 1 I1116 09:37:39.896041 7 log.go:181] (0xc001c83f40) (1) Data frame handling I1116 09:37:39.896066 7 log.go:181] (0xc001c83f40) (1) Data frame sent I1116 09:37:39.896088 7 log.go:181] (0xc0068d0fd0) (0xc001c83f40) Stream removed, broadcasting: 1 I1116 09:37:39.896465 7 log.go:181] (0xc0068d0fd0) (0xc001c83f40) Stream removed, broadcasting: 1 I1116 09:37:39.896485 7 log.go:181] (0xc0068d0fd0) (0xc0037a63c0) Stream removed, broadcasting: 3 I1116 09:37:39.896712 7 log.go:181] (0xc0068d0fd0) Go away received I1116 09:37:39.896746 7 log.go:181] (0xc0068d0fd0) (0xc0037a6460) Stream removed, broadcasting: 5 Nov 16 09:37:39.896: INFO: Waiting for responses: map[] Nov 16 09:37:39.900: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.71:8080/dial?request=hostname&protocol=http&host=10.244.1.227&port=8080&tries=1'] Namespace:pod-network-test-8844 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:37:39.900: INFO: >>> kubeConfig: /root/.kube/config I1116 09:37:39.929994 7 log.go:181] (0xc0012908f0) (0xc0037a6960) Create stream I1116 09:37:39.930019 7 log.go:181] (0xc0012908f0) (0xc0037a6960) Stream added, broadcasting: 1 I1116 09:37:39.931820 7 log.go:181] (0xc0012908f0) Reply frame received for 1 I1116 09:37:39.931853 7 log.go:181] (0xc0012908f0) (0xc005074500) Create stream I1116 09:37:39.931867 7 log.go:181] (0xc0012908f0) (0xc005074500) Stream added, broadcasting: 3 I1116 09:37:39.932628 7 log.go:181] (0xc0012908f0) Reply frame received for 3 I1116 09:37:39.932671 7 log.go:181] (0xc0012908f0) (0xc001628000) Create stream I1116 09:37:39.932688 7 log.go:181] (0xc0012908f0) (0xc001628000) Stream added, broadcasting: 5 I1116 09:37:39.933726 7 log.go:181] (0xc0012908f0) Reply frame received for 5 I1116 09:37:40.001079 7 log.go:181] 
(0xc0012908f0) Data frame received for 3 I1116 09:37:40.001118 7 log.go:181] (0xc005074500) (3) Data frame handling I1116 09:37:40.001140 7 log.go:181] (0xc005074500) (3) Data frame sent I1116 09:37:40.001617 7 log.go:181] (0xc0012908f0) Data frame received for 3 I1116 09:37:40.001642 7 log.go:181] (0xc005074500) (3) Data frame handling I1116 09:37:40.001659 7 log.go:181] (0xc0012908f0) Data frame received for 5 I1116 09:37:40.001669 7 log.go:181] (0xc001628000) (5) Data frame handling I1116 09:37:40.003549 7 log.go:181] (0xc0012908f0) Data frame received for 1 I1116 09:37:40.003606 7 log.go:181] (0xc0037a6960) (1) Data frame handling I1116 09:37:40.003629 7 log.go:181] (0xc0037a6960) (1) Data frame sent I1116 09:37:40.003646 7 log.go:181] (0xc0012908f0) (0xc0037a6960) Stream removed, broadcasting: 1 I1116 09:37:40.003662 7 log.go:181] (0xc0012908f0) Go away received I1116 09:37:40.003834 7 log.go:181] (0xc0012908f0) (0xc0037a6960) Stream removed, broadcasting: 1 I1116 09:37:40.003878 7 log.go:181] (0xc0012908f0) (0xc005074500) Stream removed, broadcasting: 3 I1116 09:37:40.003913 7 log.go:181] (0xc0012908f0) (0xc001628000) Stream removed, broadcasting: 5 Nov 16 09:37:40.003: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:40.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8844" for this suite. 
• [SLOW TEST:22.647 seconds] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":2108,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:40.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: validating cluster-info Nov 16 09:37:40.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config cluster-info' Nov 16 09:37:40.228: INFO: stderr: "" Nov 16 09:37:40.228: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34323\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34323/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:40.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8329" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":115,"skipped":2122,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:40.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Nov 16 09:37:40.331: INFO: Waiting up to 5m0s for pod "client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f" in namespace "containers-6477" to be "Succeeded or Failed" Nov 16 09:37:40.334: INFO: Pod "client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.720603ms Nov 16 09:37:42.340: INFO: Pod "client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008171684s Nov 16 09:37:44.344: INFO: Pod "client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013042274s STEP: Saw pod success Nov 16 09:37:44.345: INFO: Pod "client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f" satisfied condition "Succeeded or Failed" Nov 16 09:37:44.348: INFO: Trying to get logs from node latest-worker pod client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f container test-container: STEP: delete the pod Nov 16 09:37:44.399: INFO: Waiting for pod client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f to disappear Nov 16 09:37:44.413: INFO: Pod client-containers-c69cbe63-e52a-4555-bea3-2716928fa24f no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:44.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6477" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":2129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:44.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:37:44.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2" in namespace "projected-4703" to be "Succeeded or Failed" Nov 16 09:37:44.559: INFO: Pod "downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.980472ms Nov 16 09:37:47.938: INFO: Pod "downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.397804728s Nov 16 09:37:49.941: INFO: Pod "downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.401399606s Nov 16 09:37:51.947: INFO: Pod "downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.406944943s STEP: Saw pod success Nov 16 09:37:51.947: INFO: Pod "downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2" satisfied condition "Succeeded or Failed" Nov 16 09:37:51.950: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2 container client-container: STEP: delete the pod Nov 16 09:37:51.965: INFO: Waiting for pod downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2 to disappear Nov 16 09:37:52.034: INFO: Pod downwardapi-volume-365879fa-f9cf-436f-80ed-44f71bd4fec2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:52.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4703" for this suite. 
• [SLOW TEST:7.622 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":2165,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:52.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1a10cca2-4b2c-4494-a1c1-d3c6234c069d STEP: Creating a pod to test consume configMaps Nov 16 09:37:52.115: INFO: Waiting up to 5m0s for pod "pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9" in namespace "configmap-4543" to be "Succeeded or Failed" Nov 16 09:37:52.164: 
INFO: Pod "pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9": Phase="Pending", Reason="", readiness=false. Elapsed: 48.702452ms Nov 16 09:37:54.414: INFO: Pod "pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298953019s Nov 16 09:37:56.418: INFO: Pod "pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.303157815s STEP: Saw pod success Nov 16 09:37:56.419: INFO: Pod "pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9" satisfied condition "Succeeded or Failed" Nov 16 09:37:56.421: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9 container configmap-volume-test: STEP: delete the pod Nov 16 09:37:56.682: INFO: Waiting for pod pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9 to disappear Nov 16 09:37:56.757: INFO: Pod pod-configmaps-7afb8976-b482-4207-b254-b479f9ce25c9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:37:56.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4543" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":2172,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:37:56.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 09:37:56.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe" in namespace "projected-4494" to be "Succeeded or Failed" Nov 16 09:37:56.909: INFO: Pod "downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe": Phase="Pending", Reason="", readiness=false. Elapsed: 31.390637ms Nov 16 09:37:58.919: INFO: Pod "downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040684027s Nov 16 09:38:00.923: INFO: Pod "downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044802661s STEP: Saw pod success Nov 16 09:38:00.923: INFO: Pod "downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe" satisfied condition "Succeeded or Failed" Nov 16 09:38:00.925: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe container client-container: STEP: delete the pod Nov 16 09:38:00.986: INFO: Waiting for pod downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe to disappear Nov 16 09:38:01.011: INFO: Pod downwardapi-volume-2d1f41cd-821a-4635-a6b7-64d342a173fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:38:01.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4494" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":2177,"failed":0} SSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:38:01.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:38:01.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-926" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":120,"skipped":2180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:38:01.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 16 09:38:01.305: INFO: Waiting up to 5m0s for pod 
"pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d" in namespace "emptydir-2180" to be "Succeeded or Failed" Nov 16 09:38:01.315: INFO: Pod "pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.791461ms Nov 16 09:38:03.319: INFO: Pod "pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013685107s Nov 16 09:38:05.323: INFO: Pod "pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017474795s STEP: Saw pod success Nov 16 09:38:05.323: INFO: Pod "pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d" satisfied condition "Succeeded or Failed" Nov 16 09:38:05.325: INFO: Trying to get logs from node latest-worker pod pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d container test-container: STEP: delete the pod Nov 16 09:38:05.368: INFO: Waiting for pod pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d to disappear Nov 16 09:38:05.396: INFO: Pod pod-c2e57ed7-0361-43ae-8a47-d42913a90f8d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:38:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2180" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":2219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:38:05.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 16 09:38:05.665: INFO: Waiting up to 5m0s for pod "downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77" in namespace "downward-api-6272" to be "Succeeded or Failed" Nov 16 09:38:05.690: INFO: Pod "downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77": Phase="Pending", Reason="", readiness=false. Elapsed: 24.477835ms Nov 16 09:38:07.752: INFO: Pod "downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087359729s Nov 16 09:38:09.756: INFO: Pod "downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090459521s Nov 16 09:38:11.760: INFO: Pod "downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094419944s STEP: Saw pod success Nov 16 09:38:11.760: INFO: Pod "downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77" satisfied condition "Succeeded or Failed" Nov 16 09:38:11.763: INFO: Trying to get logs from node latest-worker pod downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77 container dapi-container: STEP: delete the pod Nov 16 09:38:11.780: INFO: Waiting for pod downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77 to disappear Nov 16 09:38:11.785: INFO: Pod downward-api-b63e9450-b7ed-4517-b5e2-423503e76d77 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:38:11.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6272" for this suite. • [SLOW TEST:6.387 seconds] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":2251,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:38:11.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Nov 16 09:38:11.993: INFO: Waiting up to 1m0s for all nodes to be ready Nov 16 09:39:12.011: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Nov 16 09:39:12.081: INFO: Created pod: pod0-sched-preemption-low-priority Nov 16 09:39:12.141: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:39:40.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3236" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:88.496 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":123,"skipped":2268,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:39:40.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for 
the deployment to be ready Nov 16 09:39:40.717: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:39:42.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116380, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116380, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116380, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116380, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:39:45.788: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:39:45.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the 
custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:39:47.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9204" for this suite. STEP: Destroying namespace "webhook-9204-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.866 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":124,"skipped":2274,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:39:47.155: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5818.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5818.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5818.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5818.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 16 09:39:53.411: INFO: DNS probes using dns-5818/dns-test-74262890-7c43-4676-b31d-b4e31fbe9f41 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:39:53.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5818" for this suite. • [SLOW TEST:6.304 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":125,"skipped":2288,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:39:53.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-v92l STEP: Creating a pod to test atomic-volume-subpath Nov 16 09:39:53.594: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-v92l" in namespace "subpath-8851" to be "Succeeded or Failed" Nov 16 09:39:53.968: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Pending", Reason="", readiness=false. Elapsed: 374.211525ms Nov 16 09:39:55.972: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377597004s Nov 16 09:39:57.984: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 4.389879295s Nov 16 09:39:59.989: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 6.39480785s Nov 16 09:40:01.994: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 8.399571411s Nov 16 09:40:03.998: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 10.403901547s Nov 16 09:40:06.003: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.408731428s Nov 16 09:40:08.008: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 14.413499624s Nov 16 09:40:10.011: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 16.417097616s Nov 16 09:40:12.016: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 18.422130513s Nov 16 09:40:14.021: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 20.427002227s Nov 16 09:40:16.026: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 22.432060676s Nov 16 09:40:18.030: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Running", Reason="", readiness=true. Elapsed: 24.436462507s Nov 16 09:40:20.034: INFO: Pod "pod-subpath-test-configmap-v92l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.439976044s STEP: Saw pod success Nov 16 09:40:20.034: INFO: Pod "pod-subpath-test-configmap-v92l" satisfied condition "Succeeded or Failed" Nov 16 09:40:20.037: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-v92l container test-container-subpath-configmap-v92l: STEP: delete the pod Nov 16 09:40:20.087: INFO: Waiting for pod pod-subpath-test-configmap-v92l to disappear Nov 16 09:40:20.097: INFO: Pod pod-subpath-test-configmap-v92l no longer exists STEP: Deleting pod pod-subpath-test-configmap-v92l Nov 16 09:40:20.097: INFO: Deleting pod "pod-subpath-test-configmap-v92l" in namespace "subpath-8851" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:40:20.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8851" for this suite. 
• [SLOW TEST:26.648 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":126,"skipped":2292,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:40:20.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:40:24.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8456" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":127,"skipped":2306,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:40:24.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6780.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6780.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6780.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6780.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6780.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6780.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 16 09:40:34.513: INFO: DNS probes using dns-6780/dns-test-703bac12-4b32-4cda-a71c-1de75d8f7935 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:40:34.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6780" for this suite. 
• [SLOW TEST:10.702 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":128,"skipped":2312,"failed":0} SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:40:34.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Nov 16 09:40:45.170: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Nov 16 09:40:45.170: INFO: >>> kubeConfig: /root/.kube/config I1116 09:40:45.200126 7 log.go:181] (0xc0068d1600) (0xc00309ee60) Create stream I1116 09:40:45.200157 7 log.go:181] (0xc0068d1600) (0xc00309ee60) Stream added, broadcasting: 1 I1116 09:40:45.202601 7 log.go:181] (0xc0068d1600) Reply frame received for 1 I1116 09:40:45.202639 7 log.go:181] (0xc0068d1600) (0xc004034140) Create stream I1116 09:40:45.202647 7 log.go:181] (0xc0068d1600) (0xc004034140) Stream added, broadcasting: 3 I1116 09:40:45.203620 7 log.go:181] (0xc0068d1600) Reply frame received for 3 I1116 09:40:45.203666 7 log.go:181] (0xc0068d1600) (0xc003e9e000) Create stream I1116 09:40:45.203685 7 log.go:181] (0xc0068d1600) (0xc003e9e000) Stream added, broadcasting: 5 I1116 09:40:45.204639 7 log.go:181] (0xc0068d1600) Reply frame received for 5 I1116 09:40:45.298228 7 log.go:181] (0xc0068d1600) Data frame received for 5 I1116 09:40:45.298266 7 log.go:181] (0xc003e9e000) (5) Data frame handling I1116 09:40:45.298309 7 log.go:181] (0xc0068d1600) Data frame received for 3 I1116 09:40:45.298367 7 log.go:181] (0xc004034140) (3) Data frame handling I1116 09:40:45.298405 7 log.go:181] (0xc004034140) (3) Data frame sent I1116 09:40:45.298437 7 log.go:181] (0xc0068d1600) Data frame received for 3 I1116 09:40:45.298465 7 log.go:181] (0xc004034140) (3) Data frame handling I1116 09:40:45.300336 7 log.go:181] (0xc0068d1600) Data frame received for 1 I1116 09:40:45.300382 7 log.go:181] (0xc00309ee60) (1) Data frame handling I1116 09:40:45.300423 7 log.go:181] (0xc00309ee60) (1) Data frame sent I1116 09:40:45.300451 7 log.go:181] (0xc0068d1600) (0xc00309ee60) Stream removed, broadcasting: 1 I1116 09:40:45.300481 7 log.go:181] (0xc0068d1600) Go away received I1116 09:40:45.300581 7 log.go:181] (0xc0068d1600) (0xc00309ee60) Stream removed, broadcasting: 1 I1116 09:40:45.300610 7 log.go:181] (0xc0068d1600) (0xc004034140) Stream removed, broadcasting: 3 I1116 09:40:45.300639 7 log.go:181] 
(0xc0068d1600) (0xc003e9e000) Stream removed, broadcasting: 5 Nov 16 09:40:45.300: INFO: Exec stderr: "" Nov 16 09:40:45.300: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:40:45.300: INFO: >>> kubeConfig: /root/.kube/config I1116 09:40:45.334376 7 log.go:181] (0xc0068d1ce0) (0xc00309f180) Create stream I1116 09:40:45.334400 7 log.go:181] (0xc0068d1ce0) (0xc00309f180) Stream added, broadcasting: 1 I1116 09:40:45.336162 7 log.go:181] (0xc0068d1ce0) Reply frame received for 1 I1116 09:40:45.336198 7 log.go:181] (0xc0068d1ce0) (0xc00309f220) Create stream I1116 09:40:45.336212 7 log.go:181] (0xc0068d1ce0) (0xc00309f220) Stream added, broadcasting: 3 I1116 09:40:45.337395 7 log.go:181] (0xc0068d1ce0) Reply frame received for 3 I1116 09:40:45.337423 7 log.go:181] (0xc0068d1ce0) (0xc00309f2c0) Create stream I1116 09:40:45.337437 7 log.go:181] (0xc0068d1ce0) (0xc00309f2c0) Stream added, broadcasting: 5 I1116 09:40:45.338244 7 log.go:181] (0xc0068d1ce0) Reply frame received for 5 I1116 09:40:45.408421 7 log.go:181] (0xc0068d1ce0) Data frame received for 3 I1116 09:40:45.408453 7 log.go:181] (0xc00309f220) (3) Data frame handling I1116 09:40:45.408461 7 log.go:181] (0xc00309f220) (3) Data frame sent I1116 09:40:45.408468 7 log.go:181] (0xc0068d1ce0) Data frame received for 3 I1116 09:40:45.408472 7 log.go:181] (0xc00309f220) (3) Data frame handling I1116 09:40:45.408492 7 log.go:181] (0xc0068d1ce0) Data frame received for 5 I1116 09:40:45.408505 7 log.go:181] (0xc00309f2c0) (5) Data frame handling I1116 09:40:45.410308 7 log.go:181] (0xc0068d1ce0) Data frame received for 1 I1116 09:40:45.410350 7 log.go:181] (0xc00309f180) (1) Data frame handling I1116 09:40:45.410440 7 log.go:181] (0xc00309f180) (1) Data frame sent I1116 09:40:45.410594 7 log.go:181] (0xc0068d1ce0) (0xc00309f180) Stream removed, 
broadcasting: 1 I1116 09:40:45.410724 7 log.go:181] (0xc0068d1ce0) (0xc00309f180) Stream removed, broadcasting: 1 I1116 09:40:45.410757 7 log.go:181] (0xc0068d1ce0) (0xc00309f220) Stream removed, broadcasting: 3 I1116 09:40:45.410992 7 log.go:181] (0xc0068d1ce0) (0xc00309f2c0) Stream removed, broadcasting: 5 Nov 16 09:40:45.411: INFO: Exec stderr: "" Nov 16 09:40:45.411: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:40:45.411: INFO: >>> kubeConfig: /root/.kube/config I1116 09:40:45.411351 7 log.go:181] (0xc0068d1ce0) Go away received I1116 09:40:45.478769 7 log.go:181] (0xc0022fc000) (0xc00309f4a0) Create stream I1116 09:40:45.478821 7 log.go:181] (0xc0022fc000) (0xc00309f4a0) Stream added, broadcasting: 1 I1116 09:40:45.483365 7 log.go:181] (0xc0022fc000) Reply frame received for 1 I1116 09:40:45.483402 7 log.go:181] (0xc0022fc000) (0xc0040341e0) Create stream I1116 09:40:45.483447 7 log.go:181] (0xc0022fc000) (0xc0040341e0) Stream added, broadcasting: 3 I1116 09:40:45.485129 7 log.go:181] (0xc0022fc000) Reply frame received for 3 I1116 09:40:45.485168 7 log.go:181] (0xc0022fc000) (0xc003e9e140) Create stream I1116 09:40:45.485193 7 log.go:181] (0xc0022fc000) (0xc003e9e140) Stream added, broadcasting: 5 I1116 09:40:45.486211 7 log.go:181] (0xc0022fc000) Reply frame received for 5 I1116 09:40:45.561393 7 log.go:181] (0xc0022fc000) Data frame received for 3 I1116 09:40:45.561440 7 log.go:181] (0xc0040341e0) (3) Data frame handling I1116 09:40:45.561464 7 log.go:181] (0xc0040341e0) (3) Data frame sent I1116 09:40:45.561492 7 log.go:181] (0xc0022fc000) Data frame received for 3 I1116 09:40:45.561509 7 log.go:181] (0xc0040341e0) (3) Data frame handling I1116 09:40:45.561549 7 log.go:181] (0xc0022fc000) Data frame received for 5 I1116 09:40:45.561581 7 log.go:181] (0xc003e9e140) (5) Data frame handling I1116 
09:40:45.562968 7 log.go:181] (0xc0022fc000) Data frame received for 1
I1116 09:40:45.563042 7 log.go:181] (0xc00309f4a0) (1) Data frame handling
I1116 09:40:45.563089 7 log.go:181] (0xc00309f4a0) (1) Data frame sent
I1116 09:40:45.563232 7 log.go:181] (0xc0022fc000) (0xc00309f4a0) Stream removed, broadcasting: 1
I1116 09:40:45.563290 7 log.go:181] (0xc0022fc000) Go away received
I1116 09:40:45.563366 7 log.go:181] (0xc0022fc000) (0xc00309f4a0) Stream removed, broadcasting: 1
I1116 09:40:45.563404 7 log.go:181] (0xc0022fc000) (0xc0040341e0) Stream removed, broadcasting: 3
I1116 09:40:45.563424 7 log.go:181] (0xc0022fc000) (0xc003e9e140) Stream removed, broadcasting: 5
Nov 16 09:40:45.563: INFO: Exec stderr: ""
Nov 16 09:40:45.563: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:45.563: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:45.591732 7 log.go:181] (0xc0001f0370) (0xc003e9e3c0) Create stream
I1116 09:40:45.591787 7 log.go:181] (0xc0001f0370) (0xc003e9e3c0) Stream added, broadcasting: 1
I1116 09:40:45.593698 7 log.go:181] (0xc0001f0370) Reply frame received for 1
I1116 09:40:45.593730 7 log.go:181] (0xc0001f0370) (0xc001028e60) Create stream
I1116 09:40:45.593740 7 log.go:181] (0xc0001f0370) (0xc001028e60) Stream added, broadcasting: 3
I1116 09:40:45.594746 7 log.go:181] (0xc0001f0370) Reply frame received for 3
I1116 09:40:45.594775 7 log.go:181] (0xc0001f0370) (0xc001e36fa0) Create stream
I1116 09:40:45.594787 7 log.go:181] (0xc0001f0370) (0xc001e36fa0) Stream added, broadcasting: 5
I1116 09:40:45.595557 7 log.go:181] (0xc0001f0370) Reply frame received for 5
I1116 09:40:45.663195 7 log.go:181] (0xc0001f0370) Data frame received for 3
I1116 09:40:45.663227 7 log.go:181] (0xc001028e60) (3) Data frame handling
I1116 09:40:45.663238 7 log.go:181] (0xc001028e60) (3) Data frame sent
I1116 09:40:45.663245 7 log.go:181] (0xc0001f0370) Data frame received for 3
I1116 09:40:45.663251 7 log.go:181] (0xc001028e60) (3) Data frame handling
I1116 09:40:45.663269 7 log.go:181] (0xc0001f0370) Data frame received for 5
I1116 09:40:45.663276 7 log.go:181] (0xc001e36fa0) (5) Data frame handling
I1116 09:40:45.665461 7 log.go:181] (0xc0001f0370) Data frame received for 1
I1116 09:40:45.665484 7 log.go:181] (0xc003e9e3c0) (1) Data frame handling
I1116 09:40:45.665497 7 log.go:181] (0xc003e9e3c0) (1) Data frame sent
I1116 09:40:45.665508 7 log.go:181] (0xc0001f0370) (0xc003e9e3c0) Stream removed, broadcasting: 1
I1116 09:40:45.665520 7 log.go:181] (0xc0001f0370) Go away received
I1116 09:40:45.665696 7 log.go:181] (0xc0001f0370) (0xc003e9e3c0) Stream removed, broadcasting: 1
I1116 09:40:45.665722 7 log.go:181] (0xc0001f0370) (0xc001028e60) Stream removed, broadcasting: 3
I1116 09:40:45.665740 7 log.go:181] (0xc0001f0370) (0xc001e36fa0) Stream removed, broadcasting: 5
Nov 16 09:40:45.665: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Nov 16 09:40:45.665: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:45.665: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:45.696238 7 log.go:181] (0xc00003bce0) (0xc0010294a0) Create stream
I1116 09:40:45.696300 7 log.go:181] (0xc00003bce0) (0xc0010294a0) Stream added, broadcasting: 1
I1116 09:40:45.698209 7 log.go:181] (0xc00003bce0) Reply frame received for 1
I1116 09:40:45.698248 7 log.go:181] (0xc00003bce0) (0xc004034280) Create stream
I1116 09:40:45.698263 7 log.go:181] (0xc00003bce0) (0xc004034280) Stream added, broadcasting: 3
I1116 09:40:45.699191 7 log.go:181] (0xc00003bce0) Reply frame received for 3
I1116 09:40:45.699218 7 log.go:181] (0xc00003bce0) (0xc004034320) Create stream
I1116 09:40:45.699228 7 log.go:181] (0xc00003bce0) (0xc004034320) Stream added, broadcasting: 5
I1116 09:40:45.700049 7 log.go:181] (0xc00003bce0) Reply frame received for 5
I1116 09:40:45.775182 7 log.go:181] (0xc00003bce0) Data frame received for 5
I1116 09:40:45.775205 7 log.go:181] (0xc004034320) (5) Data frame handling
I1116 09:40:45.775237 7 log.go:181] (0xc00003bce0) Data frame received for 3
I1116 09:40:45.775248 7 log.go:181] (0xc004034280) (3) Data frame handling
I1116 09:40:45.775256 7 log.go:181] (0xc004034280) (3) Data frame sent
I1116 09:40:45.775263 7 log.go:181] (0xc00003bce0) Data frame received for 3
I1116 09:40:45.775274 7 log.go:181] (0xc004034280) (3) Data frame handling
I1116 09:40:45.776737 7 log.go:181] (0xc00003bce0) Data frame received for 1
I1116 09:40:45.776771 7 log.go:181] (0xc0010294a0) (1) Data frame handling
I1116 09:40:45.776790 7 log.go:181] (0xc0010294a0) (1) Data frame sent
I1116 09:40:45.776808 7 log.go:181] (0xc00003bce0) (0xc0010294a0) Stream removed, broadcasting: 1
I1116 09:40:45.776827 7 log.go:181] (0xc00003bce0) Go away received
I1116 09:40:45.777031 7 log.go:181] (0xc00003bce0) (0xc0010294a0) Stream removed, broadcasting: 1
I1116 09:40:45.777061 7 log.go:181] (0xc00003bce0) (0xc004034280) Stream removed, broadcasting: 3
I1116 09:40:45.777082 7 log.go:181] (0xc00003bce0) (0xc004034320) Stream removed, broadcasting: 5
Nov 16 09:40:45.777: INFO: Exec stderr: ""
Nov 16 09:40:45.777: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:45.777: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:45.806401 7 log.go:181] (0xc0028d88f0) (0xc004034fa0) Create stream
I1116 09:40:45.806428 7 log.go:181] (0xc0028d88f0) (0xc004034fa0) Stream added, broadcasting: 1
I1116 09:40:45.808319 7 log.go:181] (0xc0028d88f0) Reply frame received for 1
I1116 09:40:45.808350 7 log.go:181] (0xc0028d88f0) (0xc003e9e460) Create stream
I1116 09:40:45.808361 7 log.go:181] (0xc0028d88f0) (0xc003e9e460) Stream added, broadcasting: 3
I1116 09:40:45.809461 7 log.go:181] (0xc0028d88f0) Reply frame received for 3
I1116 09:40:45.809501 7 log.go:181] (0xc0028d88f0) (0xc001e37040) Create stream
I1116 09:40:45.809520 7 log.go:181] (0xc0028d88f0) (0xc001e37040) Stream added, broadcasting: 5
I1116 09:40:45.810423 7 log.go:181] (0xc0028d88f0) Reply frame received for 5
I1116 09:40:45.888765 7 log.go:181] (0xc0028d88f0) Data frame received for 5
I1116 09:40:45.888800 7 log.go:181] (0xc001e37040) (5) Data frame handling
I1116 09:40:45.888921 7 log.go:181] (0xc0028d88f0) Data frame received for 3
I1116 09:40:45.888966 7 log.go:181] (0xc003e9e460) (3) Data frame handling
I1116 09:40:45.888996 7 log.go:181] (0xc003e9e460) (3) Data frame sent
I1116 09:40:45.889155 7 log.go:181] (0xc0028d88f0) Data frame received for 3
I1116 09:40:45.889192 7 log.go:181] (0xc003e9e460) (3) Data frame handling
I1116 09:40:45.890657 7 log.go:181] (0xc0028d88f0) Data frame received for 1
I1116 09:40:45.890680 7 log.go:181] (0xc004034fa0) (1) Data frame handling
I1116 09:40:45.890714 7 log.go:181] (0xc004034fa0) (1) Data frame sent
I1116 09:40:45.890737 7 log.go:181] (0xc0028d88f0) (0xc004034fa0) Stream removed, broadcasting: 1
I1116 09:40:45.890796 7 log.go:181] (0xc0028d88f0) Go away received
I1116 09:40:45.890841 7 log.go:181] (0xc0028d88f0) (0xc004034fa0) Stream removed, broadcasting: 1
I1116 09:40:45.890870 7 log.go:181] (0xc0028d88f0) (0xc003e9e460) Stream removed, broadcasting: 3
I1116 09:40:45.890898 7 log.go:181] (0xc0028d88f0) (0xc001e37040) Stream removed, broadcasting: 5
Nov 16 09:40:45.890: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Nov 16 09:40:45.890: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:45.891: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:45.927176 7 log.go:181] (0xc0001f0d10) (0xc003e9e8c0) Create stream
I1116 09:40:45.927208 7 log.go:181] (0xc0001f0d10) (0xc003e9e8c0) Stream added, broadcasting: 1
I1116 09:40:45.929634 7 log.go:181] (0xc0001f0d10) Reply frame received for 1
I1116 09:40:45.929668 7 log.go:181] (0xc0001f0d10) (0xc0010295e0) Create stream
I1116 09:40:45.929680 7 log.go:181] (0xc0001f0d10) (0xc0010295e0) Stream added, broadcasting: 3
I1116 09:40:45.930558 7 log.go:181] (0xc0001f0d10) Reply frame received for 3
I1116 09:40:45.930590 7 log.go:181] (0xc0001f0d10) (0xc001e37360) Create stream
I1116 09:40:45.930604 7 log.go:181] (0xc0001f0d10) (0xc001e37360) Stream added, broadcasting: 5
I1116 09:40:45.931370 7 log.go:181] (0xc0001f0d10) Reply frame received for 5
I1116 09:40:46.003644 7 log.go:181] (0xc0001f0d10) Data frame received for 5
I1116 09:40:46.003685 7 log.go:181] (0xc001e37360) (5) Data frame handling
I1116 09:40:46.003719 7 log.go:181] (0xc0001f0d10) Data frame received for 3
I1116 09:40:46.003733 7 log.go:181] (0xc0010295e0) (3) Data frame handling
I1116 09:40:46.003753 7 log.go:181] (0xc0010295e0) (3) Data frame sent
I1116 09:40:46.003768 7 log.go:181] (0xc0001f0d10) Data frame received for 3
I1116 09:40:46.003798 7 log.go:181] (0xc0010295e0) (3) Data frame handling
I1116 09:40:46.008922 7 log.go:181] (0xc0001f0d10) Data frame received for 1
I1116 09:40:46.008937 7 log.go:181] (0xc003e9e8c0) (1) Data frame handling
I1116 09:40:46.008967 7 log.go:181] (0xc003e9e8c0) (1) Data frame sent
I1116 09:40:46.009011 7 log.go:181] (0xc0001f0d10) (0xc003e9e8c0) Stream removed, broadcasting: 1
I1116 09:40:46.009023 7 log.go:181] (0xc0001f0d10) Go away received
I1116 09:40:46.009157 7 log.go:181] (0xc0001f0d10) (0xc003e9e8c0) Stream removed, broadcasting: 1
I1116 09:40:46.009190 7 log.go:181] (0xc0001f0d10) (0xc0010295e0) Stream removed, broadcasting: 3
I1116 09:40:46.009212 7 log.go:181] (0xc0001f0d10) (0xc001e37360) Stream removed, broadcasting: 5
Nov 16 09:40:46.009: INFO: Exec stderr: ""
Nov 16 09:40:46.009: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:46.009: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:46.034934 7 log.go:181] (0xc0001424d0) (0xc003e9eb40) Create stream
I1116 09:40:46.034967 7 log.go:181] (0xc0001424d0) (0xc003e9eb40) Stream added, broadcasting: 1
I1116 09:40:46.037128 7 log.go:181] (0xc0001424d0) Reply frame received for 1
I1116 09:40:46.037184 7 log.go:181] (0xc0001424d0) (0xc003e9ebe0) Create stream
I1116 09:40:46.037204 7 log.go:181] (0xc0001424d0) (0xc003e9ebe0) Stream added, broadcasting: 3
I1116 09:40:46.038251 7 log.go:181] (0xc0001424d0) Reply frame received for 3
I1116 09:40:46.038288 7 log.go:181] (0xc0001424d0) (0xc003e9ec80) Create stream
I1116 09:40:46.038300 7 log.go:181] (0xc0001424d0) (0xc003e9ec80) Stream added, broadcasting: 5
I1116 09:40:46.039107 7 log.go:181] (0xc0001424d0) Reply frame received for 5
I1116 09:40:46.099119 7 log.go:181] (0xc0001424d0) Data frame received for 3
I1116 09:40:46.099155 7 log.go:181] (0xc003e9ebe0) (3) Data frame handling
I1116 09:40:46.099181 7 log.go:181] (0xc003e9ebe0) (3) Data frame sent
I1116 09:40:46.099200 7 log.go:181] (0xc0001424d0) Data frame received for 3
I1116 09:40:46.099213 7 log.go:181] (0xc003e9ebe0) (3) Data frame handling
I1116 09:40:46.099231 7 log.go:181] (0xc0001424d0) Data frame received for 5
I1116 09:40:46.099249 7 log.go:181] (0xc003e9ec80) (5) Data frame handling
I1116 09:40:46.101124 7 log.go:181] (0xc0001424d0) Data frame received for 1
I1116 09:40:46.101158 7 log.go:181] (0xc003e9eb40) (1) Data frame handling
I1116 09:40:46.101172 7 log.go:181] (0xc003e9eb40) (1) Data frame sent
I1116 09:40:46.101249 7 log.go:181] (0xc0001424d0) (0xc003e9eb40) Stream removed, broadcasting: 1
I1116 09:40:46.101328 7 log.go:181] (0xc0001424d0) (0xc003e9eb40) Stream removed, broadcasting: 1
I1116 09:40:46.101356 7 log.go:181] (0xc0001424d0) (0xc003e9ebe0) Stream removed, broadcasting: 3
I1116 09:40:46.101393 7 log.go:181] (0xc0001424d0) (0xc003e9ec80) Stream removed, broadcasting: 5
Nov 16 09:40:46.101: INFO: Exec stderr: ""
Nov 16 09:40:46.101: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:46.101: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:46.101478 7 log.go:181] (0xc0001424d0) Go away received
I1116 09:40:46.138761 7 log.go:181] (0xc0030f2580) (0xc003e9ef00) Create stream
I1116 09:40:46.138794 7 log.go:181] (0xc0030f2580) (0xc003e9ef00) Stream added, broadcasting: 1
I1116 09:40:46.140903 7 log.go:181] (0xc0030f2580) Reply frame received for 1
I1116 09:40:46.140948 7 log.go:181] (0xc0030f2580) (0xc0040352c0) Create stream
I1116 09:40:46.140959 7 log.go:181] (0xc0030f2580) (0xc0040352c0) Stream added, broadcasting: 3
I1116 09:40:46.141703 7 log.go:181] (0xc0030f2580) Reply frame received for 3
I1116 09:40:46.141724 7 log.go:181] (0xc0030f2580) (0xc004035360) Create stream
I1116 09:40:46.141731 7 log.go:181] (0xc0030f2580) (0xc004035360) Stream added, broadcasting: 5
I1116 09:40:46.142598 7 log.go:181] (0xc0030f2580) Reply frame received for 5
I1116 09:40:46.211335 7 log.go:181] (0xc0030f2580) Data frame received for 3
I1116 09:40:46.211358 7 log.go:181] (0xc0040352c0) (3) Data frame handling
I1116 09:40:46.211365 7 log.go:181] (0xc0040352c0) (3) Data frame sent
I1116 09:40:46.211376 7 log.go:181] (0xc0030f2580) Data frame received for 5
I1116 09:40:46.211383 7 log.go:181] (0xc004035360) (5) Data frame handling
I1116 09:40:46.211494 7 log.go:181] (0xc0030f2580) Data frame received for 3
I1116 09:40:46.211516 7 log.go:181] (0xc0040352c0) (3) Data frame handling
I1116 09:40:46.212500 7 log.go:181] (0xc0030f2580) Data frame received for 1
I1116 09:40:46.212520 7 log.go:181] (0xc003e9ef00) (1) Data frame handling
I1116 09:40:46.212537 7 log.go:181] (0xc003e9ef00) (1) Data frame sent
I1116 09:40:46.212548 7 log.go:181] (0xc0030f2580) (0xc003e9ef00) Stream removed, broadcasting: 1
I1116 09:40:46.212628 7 log.go:181] (0xc0030f2580) (0xc003e9ef00) Stream removed, broadcasting: 1
I1116 09:40:46.212637 7 log.go:181] (0xc0030f2580) (0xc0040352c0) Stream removed, broadcasting: 3
I1116 09:40:46.212730 7 log.go:181] (0xc0030f2580) (0xc004035360) Stream removed, broadcasting: 5
I1116 09:40:46.212911 7 log.go:181] (0xc0030f2580) Go away received
Nov 16 09:40:46.212: INFO: Exec stderr: ""
Nov 16 09:40:46.212: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2632 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 16 09:40:46.213: INFO: >>> kubeConfig: /root/.kube/config
I1116 09:40:46.235744 7 log.go:181] (0xc0028d9340) (0xc0040355e0) Create stream
I1116 09:40:46.235762 7 log.go:181] (0xc0028d9340) (0xc0040355e0) Stream added, broadcasting: 1
I1116 09:40:46.237861 7 log.go:181] (0xc0028d9340) Reply frame received for 1
I1116 09:40:46.237899 7 log.go:181] (0xc0028d9340) (0xc001e37400) Create stream
I1116 09:40:46.237914 7 log.go:181] (0xc0028d9340) (0xc001e37400) Stream added, broadcasting: 3
I1116 09:40:46.244627 7 log.go:181] (0xc0028d9340) Reply frame received for 3
I1116 09:40:46.244661 7 log.go:181] (0xc0028d9340) (0xc004035680) Create stream
I1116 09:40:46.244670 7 log.go:181] (0xc0028d9340) (0xc004035680) Stream added, broadcasting: 5
I1116 09:40:46.245511 7 log.go:181] (0xc0028d9340) Reply frame received for 5
I1116 09:40:46.309867 7 log.go:181] (0xc0028d9340) Data frame received for 5
I1116 09:40:46.309895 7 log.go:181] (0xc004035680) (5) Data frame handling
I1116 09:40:46.309927 7 log.go:181] (0xc0028d9340) Data frame received for 3
I1116 09:40:46.309947 7 log.go:181] (0xc001e37400) (3) Data frame handling
I1116 09:40:46.309960 7 log.go:181] (0xc001e37400) (3) Data frame sent
I1116 09:40:46.309980 7 log.go:181] (0xc0028d9340) Data frame received for 3
I1116 09:40:46.309990 7 log.go:181] (0xc001e37400) (3) Data frame handling
I1116 09:40:46.312017 7 log.go:181] (0xc0028d9340) Data frame received for 1
I1116 09:40:46.312034 7 log.go:181] (0xc0040355e0) (1) Data frame handling
I1116 09:40:46.312046 7 log.go:181] (0xc0040355e0) (1) Data frame sent
I1116 09:40:46.312185 7 log.go:181] (0xc0028d9340) (0xc0040355e0) Stream removed, broadcasting: 1
I1116 09:40:46.312219 7 log.go:181] (0xc0028d9340) Go away received
I1116 09:40:46.312289 7 log.go:181] (0xc0028d9340) (0xc0040355e0) Stream removed, broadcasting: 1
I1116 09:40:46.312310 7 log.go:181] (0xc0028d9340) (0xc001e37400) Stream removed, broadcasting: 3
I1116 09:40:46.312320 7 log.go:181] (0xc0028d9340) (0xc004035680) Stream removed, broadcasting: 5
Nov 16 09:40:46.312: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:40:46.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2632" for this suite.
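The pass/fail criterion behind the STEP lines above is a banner check: kubelet rewrites a container's /etc/hosts unless the container mounts its own file over it, or the pod runs with hostNetwork=true. A minimal local sketch of that check follows; the banner string and the `is_kubelet_managed` helper are assumptions for illustration (the real test execs `cat` inside the pod via ExecWithOptions and inspects the output):

```shell
# Sketch: decide whether an /etc/hosts file looks kubelet-managed.
# Assumption: kubelet-managed files begin with a "# Kubernetes-managed
# hosts file" banner; simulated here with two local temp files.
managed=$(mktemp)
unmanaged=$(mktemp)
printf '# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n' > "$managed"
printf '127.0.0.1\tlocalhost\n' > "$unmanaged"

is_kubelet_managed() {
  # Only the first line matters: check for the banner.
  head -n1 "$1" | grep -q '^# Kubernetes-managed hosts file'
}

is_kubelet_managed "$managed"   && echo "managed file: kubelet-managed"
is_kubelet_managed "$unmanaged" || echo "unmanaged file: left alone"
```

The test above asserts the banner is present for busybox-1/busybox-2 (kubelet-managed), absent for busybox-3 (own /etc/hosts mount), and absent for every container of the hostNetwork pod.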
• [SLOW TEST:11.407 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":2315,"failed":0}
SSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 09:40:46.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8198.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8198.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 169.193.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.193.169_udp@PTR;check="$$(dig +tcp +noall +answer +search 169.193.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.193.169_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8198.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8198.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8198.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8198.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8198.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 169.193.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.193.169_udp@PTR;check="$$(dig +tcp +noall +answer +search 169.193.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.193.169_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 16 09:40:52.597: INFO: Unable to read wheezy_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.600: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.603: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.625: INFO: Unable to read jessie_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.629: INFO: Unable to read jessie_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.632: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.635: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:52.652: INFO: Lookups using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 failed for: [wheezy_udp@dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_udp@dns-test-service.dns-8198.svc.cluster.local jessie_tcp@dns-test-service.dns-8198.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local]
Nov 16 09:40:57.661: INFO: Unable to read wheezy_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.664: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.667: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.669: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.685: INFO: Unable to read jessie_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.690: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.693: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:40:57.712: INFO: Lookups using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 failed for: [wheezy_udp@dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_udp@dns-test-service.dns-8198.svc.cluster.local jessie_tcp@dns-test-service.dns-8198.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local]
Nov 16 09:41:02.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.662: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.665: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.715: INFO: Unable to read jessie_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.718: INFO: Unable to read jessie_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.721: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.724: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:02.748: INFO: Lookups using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 failed for: [wheezy_udp@dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_udp@dns-test-service.dns-8198.svc.cluster.local jessie_tcp@dns-test-service.dns-8198.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local]
Nov 16 09:41:07.658: INFO: Unable to read wheezy_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.662: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.666: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.670: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.690: INFO: Unable to read jessie_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.693: INFO: Unable to read jessie_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.696: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.698: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:07.716: INFO: Lookups using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 failed for: [wheezy_udp@dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_udp@dns-test-service.dns-8198.svc.cluster.local jessie_tcp@dns-test-service.dns-8198.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local]
Nov 16 09:41:12.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.664: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.666: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.685: INFO: Unable to read jessie_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.689: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.692: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:12.712: INFO: Lookups using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 failed for: [wheezy_udp@dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_udp@dns-test-service.dns-8198.svc.cluster.local jessie_tcp@dns-test-service.dns-8198.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local]
Nov 16 09:41:17.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.665: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.669: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.688: INFO: Unable to read jessie_udp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.691: INFO: Unable to read jessie_tcp@dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.694: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.696: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local from pod dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592: the server could not find the requested resource (get pods dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592)
Nov 16 09:41:17.711: INFO: Lookups using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 failed for: [wheezy_udp@dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@dns-test-service.dns-8198.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_udp@dns-test-service.dns-8198.svc.cluster.local jessie_tcp@dns-test-service.dns-8198.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8198.svc.cluster.local]
Nov 16 09:41:22.924: INFO: DNS probes using dns-8198/dns-test-fb23f7ef-8bd6-47f2-87ac-e9d809c13592 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 09:41:24.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8198" for this suite.
• [SLOW TEST:38.577 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":130,"skipped":2318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:41:24.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-sdkf STEP: Creating a pod to test atomic-volume-subpath Nov 16 09:41:25.111: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sdkf" in namespace "subpath-3347" to be "Succeeded or Failed" Nov 16 09:41:25.304: INFO: Pod 
"pod-subpath-test-secret-sdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 192.081643ms Nov 16 09:41:27.309: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197662556s Nov 16 09:41:29.322: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 4.210251437s Nov 16 09:41:31.327: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 6.215316457s Nov 16 09:41:33.332: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 8.220163883s Nov 16 09:41:35.337: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 10.225598583s Nov 16 09:41:37.342: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 12.230772622s Nov 16 09:41:39.346: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 14.234023117s Nov 16 09:41:41.351: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 16.239147027s Nov 16 09:41:43.356: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 18.243991182s Nov 16 09:41:45.360: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 20.248595035s Nov 16 09:41:47.365: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Running", Reason="", readiness=true. Elapsed: 22.253570873s Nov 16 09:41:49.369: INFO: Pod "pod-subpath-test-secret-sdkf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.257516006s STEP: Saw pod success Nov 16 09:41:49.369: INFO: Pod "pod-subpath-test-secret-sdkf" satisfied condition "Succeeded or Failed" Nov 16 09:41:49.372: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-sdkf container test-container-subpath-secret-sdkf: STEP: delete the pod Nov 16 09:41:49.413: INFO: Waiting for pod pod-subpath-test-secret-sdkf to disappear Nov 16 09:41:49.420: INFO: Pod pod-subpath-test-secret-sdkf no longer exists STEP: Deleting pod pod-subpath-test-secret-sdkf Nov 16 09:41:49.420: INFO: Deleting pod "pod-subpath-test-secret-sdkf" in namespace "subpath-3347" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:41:49.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3347" for this suite. • [SLOW TEST:24.533 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":131,"skipped":2346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:41:49.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Nov 16 09:41:49.482: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:05.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-446" for this suite. 
• [SLOW TEST:16.212 seconds] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":2382,"failed":0} SSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:05.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 16 09:42:05.748: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Nov 16 09:42:05.751: INFO: starting watch STEP: patching STEP: updating Nov 16 09:42:05.771: INFO: waiting for watch events with expected annotations Nov 16 09:42:05.771: INFO: saw patched and updated annotations STEP: patching /status STEP: updating 
/status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:05.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-9511" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":133,"skipped":2388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:05.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-50e3bd11-6129-4dff-8c30-b22af91631ff STEP: Creating secret with name secret-projected-all-test-volume-5bf8aa2f-bebc-462c-8bf4-7bc51225b1bf STEP: Creating a pod to test Check all projections for projected volume plugin Nov 16 09:42:06.023: INFO: Waiting up to 5m0s for pod "projected-volume-aed9d7ac-d709-4bae-98ab-345170024825" in namespace 
"projected-6094" to be "Succeeded or Failed" Nov 16 09:42:06.043: INFO: Pod "projected-volume-aed9d7ac-d709-4bae-98ab-345170024825": Phase="Pending", Reason="", readiness=false. Elapsed: 19.646576ms Nov 16 09:42:08.192: INFO: Pod "projected-volume-aed9d7ac-d709-4bae-98ab-345170024825": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168444152s Nov 16 09:42:10.215: INFO: Pod "projected-volume-aed9d7ac-d709-4bae-98ab-345170024825": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191634286s STEP: Saw pod success Nov 16 09:42:10.215: INFO: Pod "projected-volume-aed9d7ac-d709-4bae-98ab-345170024825" satisfied condition "Succeeded or Failed" Nov 16 09:42:10.218: INFO: Trying to get logs from node latest-worker pod projected-volume-aed9d7ac-d709-4bae-98ab-345170024825 container projected-all-volume-test: STEP: delete the pod Nov 16 09:42:10.249: INFO: Waiting for pod projected-volume-aed9d7ac-d709-4bae-98ab-345170024825 to disappear Nov 16 09:42:10.261: INFO: Pod projected-volume-aed9d7ac-d709-4bae-98ab-345170024825 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:10.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6094" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":134,"skipped":2412,"failed":0} SSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:10.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Nov 16 09:42:10.380: INFO: Created pod &Pod{ObjectMeta:{dns-1250 dns-1250 /api/v1/namespaces/dns-1250/pods/dns-1250 92e617fc-fbb2-437d-a15d-36037c2fd41d 9782366 0 2020-11-16 09:42:10 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-11-16 09:42:10 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pbqns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pbqns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pbqns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 09:42:10.395: INFO: The status of Pod dns-1250 is Pending, waiting for it to be Running (with Ready = true) Nov 16 09:42:12.400: INFO: The status of Pod dns-1250 is Pending, waiting for it to be Running (with Ready = true) Nov 16 09:42:14.399: INFO: The status of Pod dns-1250 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on 
pod... Nov 16 09:42:14.399: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1250 PodName:dns-1250 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:42:14.399: INFO: >>> kubeConfig: /root/.kube/config I1116 09:42:14.422837 7 log.go:181] (0xc0030f2790) (0xc0039481e0) Create stream I1116 09:42:14.422860 7 log.go:181] (0xc0030f2790) (0xc0039481e0) Stream added, broadcasting: 1 I1116 09:42:14.424071 7 log.go:181] (0xc0030f2790) Reply frame received for 1 I1116 09:42:14.424107 7 log.go:181] (0xc0030f2790) (0xc005074960) Create stream I1116 09:42:14.424120 7 log.go:181] (0xc0030f2790) (0xc005074960) Stream added, broadcasting: 3 I1116 09:42:14.424694 7 log.go:181] (0xc0030f2790) Reply frame received for 3 I1116 09:42:14.424714 7 log.go:181] (0xc0030f2790) (0xc000f281e0) Create stream I1116 09:42:14.424721 7 log.go:181] (0xc0030f2790) (0xc000f281e0) Stream added, broadcasting: 5 I1116 09:42:14.425405 7 log.go:181] (0xc0030f2790) Reply frame received for 5 I1116 09:42:14.495420 7 log.go:181] (0xc0030f2790) Data frame received for 3 I1116 09:42:14.495459 7 log.go:181] (0xc005074960) (3) Data frame handling I1116 09:42:14.495491 7 log.go:181] (0xc005074960) (3) Data frame sent I1116 09:42:14.496769 7 log.go:181] (0xc0030f2790) Data frame received for 3 I1116 09:42:14.496949 7 log.go:181] (0xc005074960) (3) Data frame handling I1116 09:42:14.497174 7 log.go:181] (0xc0030f2790) Data frame received for 5 I1116 09:42:14.497190 7 log.go:181] (0xc000f281e0) (5) Data frame handling I1116 09:42:14.498695 7 log.go:181] (0xc0030f2790) Data frame received for 1 I1116 09:42:14.498708 7 log.go:181] (0xc0039481e0) (1) Data frame handling I1116 09:42:14.498715 7 log.go:181] (0xc0039481e0) (1) Data frame sent I1116 09:42:14.498725 7 log.go:181] (0xc0030f2790) (0xc0039481e0) Stream removed, broadcasting: 1 I1116 09:42:14.498799 7 log.go:181] (0xc0030f2790) Go away received I1116 09:42:14.498842 7 log.go:181] 
(0xc0030f2790) (0xc0039481e0) Stream removed, broadcasting: 1 I1116 09:42:14.498867 7 log.go:181] (0xc0030f2790) (0xc005074960) Stream removed, broadcasting: 3 I1116 09:42:14.498879 7 log.go:181] (0xc0030f2790) (0xc000f281e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Nov 16 09:42:14.498: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1250 PodName:dns-1250 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:42:14.498: INFO: >>> kubeConfig: /root/.kube/config I1116 09:42:14.534072 7 log.go:181] (0xc0006adad0) (0xc000f286e0) Create stream I1116 09:42:14.534097 7 log.go:181] (0xc0006adad0) (0xc000f286e0) Stream added, broadcasting: 1 I1116 09:42:14.536214 7 log.go:181] (0xc0006adad0) Reply frame received for 1 I1116 09:42:14.536264 7 log.go:181] (0xc0006adad0) (0xc003877540) Create stream I1116 09:42:14.536287 7 log.go:181] (0xc0006adad0) (0xc003877540) Stream added, broadcasting: 3 I1116 09:42:14.537824 7 log.go:181] (0xc0006adad0) Reply frame received for 3 I1116 09:42:14.537930 7 log.go:181] (0xc0006adad0) (0xc00523f7c0) Create stream I1116 09:42:14.537960 7 log.go:181] (0xc0006adad0) (0xc00523f7c0) Stream added, broadcasting: 5 I1116 09:42:14.540990 7 log.go:181] (0xc0006adad0) Reply frame received for 5 I1116 09:42:14.607131 7 log.go:181] (0xc0006adad0) Data frame received for 3 I1116 09:42:14.607189 7 log.go:181] (0xc003877540) (3) Data frame handling I1116 09:42:14.607212 7 log.go:181] (0xc003877540) (3) Data frame sent I1116 09:42:14.609171 7 log.go:181] (0xc0006adad0) Data frame received for 5 I1116 09:42:14.609191 7 log.go:181] (0xc00523f7c0) (5) Data frame handling I1116 09:42:14.609217 7 log.go:181] (0xc0006adad0) Data frame received for 3 I1116 09:42:14.609230 7 log.go:181] (0xc003877540) (3) Data frame handling I1116 09:42:14.611378 7 log.go:181] (0xc0006adad0) Data frame received for 1 I1116 09:42:14.611423 7 log.go:181] 
(0xc000f286e0) (1) Data frame handling I1116 09:42:14.611452 7 log.go:181] (0xc000f286e0) (1) Data frame sent I1116 09:42:14.611477 7 log.go:181] (0xc0006adad0) (0xc000f286e0) Stream removed, broadcasting: 1 I1116 09:42:14.611511 7 log.go:181] (0xc0006adad0) Go away received I1116 09:42:14.611607 7 log.go:181] (0xc0006adad0) (0xc000f286e0) Stream removed, broadcasting: 1 I1116 09:42:14.611638 7 log.go:181] (0xc0006adad0) (0xc003877540) Stream removed, broadcasting: 3 I1116 09:42:14.611658 7 log.go:181] (0xc0006adad0) (0xc00523f7c0) Stream removed, broadcasting: 5 Nov 16 09:42:14.611: INFO: Deleting pod dns-1250... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1250" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":135,"skipped":2416,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:14.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Nov 16 09:42:14.970: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2167 /api/v1/namespaces/watch-2167/configmaps/e2e-watch-test-resource-version 00fbcbc1-a015-4f82-b1a4-5d871f889054 9782409 0 2020-11-16 09:42:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-11-16 09:42:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 09:42:14.970: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2167 /api/v1/namespaces/watch-2167/configmaps/e2e-watch-test-resource-version 00fbcbc1-a015-4f82-b1a4-5d871f889054 9782410 0 2020-11-16 09:42:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-11-16 09:42:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:14.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2167" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":136,"skipped":2422,"failed":0} SSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:14.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:15.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9979" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":137,"skipped":2427,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:15.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:15.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7805" for this suite. 
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":138,"skipped":2431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:15.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-0395815c-4dd6-4a39-a560-d02f3c7855cc STEP: Creating secret with name s-test-opt-upd-a2573d0a-9447-47e1-a512-db115404cd2e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0395815c-4dd6-4a39-a560-d02f3c7855cc STEP: Updating secret s-test-opt-upd-a2573d0a-9447-47e1-a512-db115404cd2e STEP: Creating secret with name s-test-opt-create-d8e47071-b8ef-45b6-b211-86c3d97d2a0c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:24.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1061" for this suite. 
• [SLOW TEST:8.300 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":139,"skipped":2461,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:24.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:42:24.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:42:26.929: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116544, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116544, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116544, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741116544, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:42:29.976: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:42:30.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-157" for this suite. STEP: Destroying namespace "webhook-157-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.434 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":140,"skipped":2481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:42:30.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 16 09:42:30.600: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 16 09:42:30.621: INFO: 
Waiting for terminating namespaces to be deleted... Nov 16 09:42:30.652: INFO: Logging pods the apiserver thinks is on node latest-worker before test Nov 16 09:42:30.686: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.686: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 09:42:30.686: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.686: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 09:42:30.686: INFO: pod-projected-secrets-69e4b7d6-1de1-4790-bede-f3e42279731b from projected-1061 started at 2020-11-16 09:42:16 +0000 UTC (3 container statuses recorded) Nov 16 09:42:30.686: INFO: Container creates-volume-test ready: true, restart count 0 Nov 16 09:42:30.686: INFO: Container dels-volume-test ready: true, restart count 0 Nov 16 09:42:30.686: INFO: Container upds-volume-test ready: true, restart count 0 Nov 16 09:42:30.686: INFO: sample-webhook-deployment-cbccbf6bb-sxz4d from webhook-157 started at 2020-11-16 09:42:24 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.686: INFO: Container sample-webhook ready: true, restart count 0 Nov 16 09:42:30.686: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Nov 16 09:42:30.691: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.691: INFO: Container coredns ready: true, restart count 0 Nov 16 09:42:30.691: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.691: INFO: Container coredns ready: true, restart count 0 Nov 16 09:42:30.691: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.691: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 09:42:30.691: 
INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.691: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 09:42:30.691: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 09:42:30.691: INFO: Container local-path-provisioner ready: true, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e2248b37-fdca-4a3e-98d3-4018a27f06d8 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-e2248b37-fdca-4a3e-98d3-4018a27f06d8 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e2248b37-fdca-4a3e-98d3-4018a27f06d8 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:47:38.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4075" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.419 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":141,"skipped":2505,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:47:38.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all 
ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:47:39.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9738" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":142,"skipped":2534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:47:39.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Nov 16 09:47:39.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3448' Nov 16 09:47:44.201: INFO: stderr: "" Nov 16 
09:47:44.202: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 16 09:47:45.205: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 09:47:45.205: INFO: Found 0 / 1 Nov 16 09:47:46.205: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 09:47:46.205: INFO: Found 0 / 1 Nov 16 09:47:47.206: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 09:47:47.206: INFO: Found 0 / 1 Nov 16 09:47:48.207: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 09:47:48.207: INFO: Found 1 / 1 Nov 16 09:47:48.207: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Nov 16 09:47:48.210: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 09:47:48.210: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 16 09:47:48.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config patch pod agnhost-primary-nkzq7 --namespace=kubectl-3448 -p {"metadata":{"annotations":{"x":"y"}}}' Nov 16 09:47:48.339: INFO: stderr: "" Nov 16 09:47:48.339: INFO: stdout: "pod/agnhost-primary-nkzq7 patched\n" STEP: checking annotations Nov 16 09:47:48.342: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 09:47:48.342: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:47:48.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3448" for this suite. 
• [SLOW TEST:9.235 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":143,"skipped":2578,"failed":0} S ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:47:48.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7984 STEP: creating replication controller nodeport-test in 
namespace services-7984 I1116 09:47:48.498681 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7984, replica count: 2 I1116 09:47:51.549244 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:47:54.549471 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:47:54.549: INFO: Creating new exec pod Nov 16 09:47:59.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7984 execpodvf7cd -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Nov 16 09:47:59.849: INFO: stderr: "I1116 09:47:59.736803 1832 log.go:181] (0xc0001fbe40) (0xc0005d8960) Create stream\nI1116 09:47:59.736941 1832 log.go:181] (0xc0001fbe40) (0xc0005d8960) Stream added, broadcasting: 1\nI1116 09:47:59.741924 1832 log.go:181] (0xc0001fbe40) Reply frame received for 1\nI1116 09:47:59.742485 1832 log.go:181] (0xc0001fbe40) (0xc000cb20a0) Create stream\nI1116 09:47:59.742518 1832 log.go:181] (0xc0001fbe40) (0xc000cb20a0) Stream added, broadcasting: 3\nI1116 09:47:59.743556 1832 log.go:181] (0xc0001fbe40) Reply frame received for 3\nI1116 09:47:59.743606 1832 log.go:181] (0xc0001fbe40) (0xc000cb2140) Create stream\nI1116 09:47:59.743633 1832 log.go:181] (0xc0001fbe40) (0xc000cb2140) Stream added, broadcasting: 5\nI1116 09:47:59.744563 1832 log.go:181] (0xc0001fbe40) Reply frame received for 5\nI1116 09:47:59.841185 1832 log.go:181] (0xc0001fbe40) Data frame received for 3\nI1116 09:47:59.841233 1832 log.go:181] (0xc000cb20a0) (3) Data frame handling\nI1116 09:47:59.841266 1832 log.go:181] (0xc0001fbe40) Data frame received for 5\nI1116 09:47:59.841282 1832 log.go:181] (0xc000cb2140) (5) Data frame handling\nI1116 09:47:59.841309 1832 log.go:181] (0xc000cb2140) (5) Data 
frame sent\nI1116 09:47:59.841326 1832 log.go:181] (0xc0001fbe40) Data frame received for 5\nI1116 09:47:59.841339 1832 log.go:181] (0xc000cb2140) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1116 09:47:59.842993 1832 log.go:181] (0xc0001fbe40) Data frame received for 1\nI1116 09:47:59.843015 1832 log.go:181] (0xc0005d8960) (1) Data frame handling\nI1116 09:47:59.843032 1832 log.go:181] (0xc0005d8960) (1) Data frame sent\nI1116 09:47:59.843043 1832 log.go:181] (0xc0001fbe40) (0xc0005d8960) Stream removed, broadcasting: 1\nI1116 09:47:59.843053 1832 log.go:181] (0xc0001fbe40) Go away received\nI1116 09:47:59.843491 1832 log.go:181] (0xc0001fbe40) (0xc0005d8960) Stream removed, broadcasting: 1\nI1116 09:47:59.843511 1832 log.go:181] (0xc0001fbe40) (0xc000cb20a0) Stream removed, broadcasting: 3\nI1116 09:47:59.843524 1832 log.go:181] (0xc0001fbe40) (0xc000cb2140) Stream removed, broadcasting: 5\n" Nov 16 09:47:59.849: INFO: stdout: "" Nov 16 09:47:59.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7984 execpodvf7cd -- /bin/sh -x -c nc -zv -t -w 2 10.109.215.170 80' Nov 16 09:48:00.090: INFO: stderr: "I1116 09:48:00.011933 1850 log.go:181] (0xc00003a4d0) (0xc0007d2140) Create stream\nI1116 09:48:00.012014 1850 log.go:181] (0xc00003a4d0) (0xc0007d2140) Stream added, broadcasting: 1\nI1116 09:48:00.013676 1850 log.go:181] (0xc00003a4d0) Reply frame received for 1\nI1116 09:48:00.013707 1850 log.go:181] (0xc00003a4d0) (0xc0008f0000) Create stream\nI1116 09:48:00.013718 1850 log.go:181] (0xc00003a4d0) (0xc0008f0000) Stream added, broadcasting: 3\nI1116 09:48:00.014479 1850 log.go:181] (0xc00003a4d0) Reply frame received for 3\nI1116 09:48:00.014524 1850 log.go:181] (0xc00003a4d0) (0xc0004ab2c0) Create stream\nI1116 09:48:00.014541 1850 log.go:181] (0xc00003a4d0) (0xc0004ab2c0) Stream added, broadcasting: 
5\nI1116 09:48:00.015307 1850 log.go:181] (0xc00003a4d0) Reply frame received for 5\nI1116 09:48:00.081822 1850 log.go:181] (0xc00003a4d0) Data frame received for 3\nI1116 09:48:00.081849 1850 log.go:181] (0xc0008f0000) (3) Data frame handling\nI1116 09:48:00.081888 1850 log.go:181] (0xc00003a4d0) Data frame received for 5\nI1116 09:48:00.081915 1850 log.go:181] (0xc0004ab2c0) (5) Data frame handling\nI1116 09:48:00.081928 1850 log.go:181] (0xc0004ab2c0) (5) Data frame sent\nI1116 09:48:00.081936 1850 log.go:181] (0xc00003a4d0) Data frame received for 5\nI1116 09:48:00.081949 1850 log.go:181] (0xc0004ab2c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.215.170 80\nConnection to 10.109.215.170 80 port [tcp/http] succeeded!\nI1116 09:48:00.082872 1850 log.go:181] (0xc00003a4d0) Data frame received for 1\nI1116 09:48:00.082905 1850 log.go:181] (0xc0007d2140) (1) Data frame handling\nI1116 09:48:00.082934 1850 log.go:181] (0xc0007d2140) (1) Data frame sent\nI1116 09:48:00.082962 1850 log.go:181] (0xc00003a4d0) (0xc0007d2140) Stream removed, broadcasting: 1\nI1116 09:48:00.082982 1850 log.go:181] (0xc00003a4d0) Go away received\nI1116 09:48:00.083392 1850 log.go:181] (0xc00003a4d0) (0xc0007d2140) Stream removed, broadcasting: 1\nI1116 09:48:00.083410 1850 log.go:181] (0xc00003a4d0) (0xc0008f0000) Stream removed, broadcasting: 3\nI1116 09:48:00.083418 1850 log.go:181] (0xc00003a4d0) (0xc0004ab2c0) Stream removed, broadcasting: 5\n" Nov 16 09:48:00.090: INFO: stdout: "" Nov 16 09:48:00.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7984 execpodvf7cd -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30206' Nov 16 09:48:00.318: INFO: stderr: "I1116 09:48:00.238147 1868 log.go:181] (0xc000d911e0) (0xc000924780) Create stream\nI1116 09:48:00.238206 1868 log.go:181] (0xc000d911e0) (0xc000924780) Stream added, broadcasting: 1\nI1116 09:48:00.242309 1868 log.go:181] (0xc000d911e0) Reply frame 
received for 1\nI1116 09:48:00.242348 1868 log.go:181] (0xc000d911e0) (0xc000924000) Create stream\nI1116 09:48:00.242360 1868 log.go:181] (0xc000d911e0) (0xc000924000) Stream added, broadcasting: 3\nI1116 09:48:00.243112 1868 log.go:181] (0xc000d911e0) Reply frame received for 3\nI1116 09:48:00.243157 1868 log.go:181] (0xc000d911e0) (0xc000d00000) Create stream\nI1116 09:48:00.243168 1868 log.go:181] (0xc000d911e0) (0xc000d00000) Stream added, broadcasting: 5\nI1116 09:48:00.243948 1868 log.go:181] (0xc000d911e0) Reply frame received for 5\nI1116 09:48:00.309503 1868 log.go:181] (0xc000d911e0) Data frame received for 3\nI1116 09:48:00.309534 1868 log.go:181] (0xc000924000) (3) Data frame handling\nI1116 09:48:00.309554 1868 log.go:181] (0xc000d911e0) Data frame received for 5\nI1116 09:48:00.309579 1868 log.go:181] (0xc000d00000) (5) Data frame handling\nI1116 09:48:00.309601 1868 log.go:181] (0xc000d00000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30206\nConnection to 172.18.0.15 30206 port [tcp/30206] succeeded!\nI1116 09:48:00.309630 1868 log.go:181] (0xc000d911e0) Data frame received for 5\nI1116 09:48:00.309649 1868 log.go:181] (0xc000d00000) (5) Data frame handling\nI1116 09:48:00.310765 1868 log.go:181] (0xc000d911e0) Data frame received for 1\nI1116 09:48:00.310783 1868 log.go:181] (0xc000924780) (1) Data frame handling\nI1116 09:48:00.310803 1868 log.go:181] (0xc000924780) (1) Data frame sent\nI1116 09:48:00.310824 1868 log.go:181] (0xc000d911e0) (0xc000924780) Stream removed, broadcasting: 1\nI1116 09:48:00.310873 1868 log.go:181] (0xc000d911e0) Go away received\nI1116 09:48:00.311236 1868 log.go:181] (0xc000d911e0) (0xc000924780) Stream removed, broadcasting: 1\nI1116 09:48:00.311253 1868 log.go:181] (0xc000d911e0) (0xc000924000) Stream removed, broadcasting: 3\nI1116 09:48:00.311262 1868 log.go:181] (0xc000d911e0) (0xc000d00000) Stream removed, broadcasting: 5\n" Nov 16 09:48:00.318: INFO: stdout: "" Nov 16 09:48:00.318: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7984 execpodvf7cd -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30206' Nov 16 09:48:00.545: INFO: stderr: "I1116 09:48:00.451128 1886 log.go:181] (0xc000930dc0) (0xc0003bf680) Create stream\nI1116 09:48:00.451182 1886 log.go:181] (0xc000930dc0) (0xc0003bf680) Stream added, broadcasting: 1\nI1116 09:48:00.456718 1886 log.go:181] (0xc000930dc0) Reply frame received for 1\nI1116 09:48:00.456751 1886 log.go:181] (0xc000930dc0) (0xc000624000) Create stream\nI1116 09:48:00.456766 1886 log.go:181] (0xc000930dc0) (0xc000624000) Stream added, broadcasting: 3\nI1116 09:48:00.457877 1886 log.go:181] (0xc000930dc0) Reply frame received for 3\nI1116 09:48:00.457921 1886 log.go:181] (0xc000930dc0) (0xc0004b0280) Create stream\nI1116 09:48:00.457941 1886 log.go:181] (0xc000930dc0) (0xc0004b0280) Stream added, broadcasting: 5\nI1116 09:48:00.458813 1886 log.go:181] (0xc000930dc0) Reply frame received for 5\nI1116 09:48:00.536397 1886 log.go:181] (0xc000930dc0) Data frame received for 3\nI1116 09:48:00.536433 1886 log.go:181] (0xc000624000) (3) Data frame handling\nI1116 09:48:00.536501 1886 log.go:181] (0xc000930dc0) Data frame received for 5\nI1116 09:48:00.536547 1886 log.go:181] (0xc0004b0280) (5) Data frame handling\nI1116 09:48:00.536587 1886 log.go:181] (0xc0004b0280) (5) Data frame sent\nI1116 09:48:00.536622 1886 log.go:181] (0xc000930dc0) Data frame received for 5\nI1116 09:48:00.536646 1886 log.go:181] (0xc0004b0280) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30206\nConnection to 172.18.0.14 30206 port [tcp/30206] succeeded!\nI1116 09:48:00.538517 1886 log.go:181] (0xc000930dc0) Data frame received for 1\nI1116 09:48:00.538546 1886 log.go:181] (0xc0003bf680) (1) Data frame handling\nI1116 09:48:00.538566 1886 log.go:181] (0xc0003bf680) (1) Data frame sent\nI1116 09:48:00.538593 1886 log.go:181] (0xc000930dc0) (0xc0003bf680) Stream removed, 
broadcasting: 1\nI1116 09:48:00.538746 1886 log.go:181] (0xc000930dc0) Go away received\nI1116 09:48:00.539286 1886 log.go:181] (0xc000930dc0) (0xc0003bf680) Stream removed, broadcasting: 1\nI1116 09:48:00.539327 1886 log.go:181] (0xc000930dc0) (0xc000624000) Stream removed, broadcasting: 3\nI1116 09:48:00.539348 1886 log.go:181] (0xc000930dc0) (0xc0004b0280) Stream removed, broadcasting: 5\n" Nov 16 09:48:00.545: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:48:00.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7984" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.204 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":144,"skipped":2579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:48:00.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 16 09:48:08.743: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 16 09:48:08.758: INFO: Pod pod-with-poststart-exec-hook still exists Nov 16 09:48:10.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 16 09:48:10.762: INFO: Pod pod-with-poststart-exec-hook still exists Nov 16 09:48:12.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 16 09:48:12.763: INFO: Pod pod-with-poststart-exec-hook still exists Nov 16 09:48:14.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 16 09:48:14.762: INFO: Pod pod-with-poststart-exec-hook still exists Nov 16 09:48:16.758: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 16 09:48:16.762: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 
09:48:16.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8166" for this suite. • [SLOW TEST:16.218 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":145,"skipped":2613,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:48:16.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:48:27.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4137" for this suite. • [SLOW TEST:11.160 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":303,"completed":146,"skipped":2642,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:48:27.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Nov 16 09:48:28.048: INFO: Waiting up to 1m0s for all nodes to be ready Nov 16 09:49:28.071: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:49:28.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
Nov 16 09:49:32.190: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:49:46.382: INFO: pods created so far: [1 1 1] Nov 16 09:49:46.382: INFO: length of pods created so far: 3 Nov 16 09:49:54.392: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:50:01.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-9652" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:50:01.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9527" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:93.635 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":147,"skipped":2653,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:50:01.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-5c5e472a-f007-43f7-b3ce-db48ddcfbbaf STEP: Creating a pod to test consume secrets Nov 16 09:50:01.731: INFO: Waiting up to 5m0s for pod "pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a" in namespace "secrets-2697" to be "Succeeded or Failed" Nov 16 09:50:01.743: INFO: Pod "pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.828002ms Nov 16 09:50:03.747: INFO: Pod "pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015979764s Nov 16 09:50:05.751: INFO: Pod "pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020030717s STEP: Saw pod success Nov 16 09:50:05.751: INFO: Pod "pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a" satisfied condition "Succeeded or Failed" Nov 16 09:50:05.754: INFO: Trying to get logs from node latest-worker pod pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a container secret-volume-test: STEP: delete the pod Nov 16 09:50:05.786: INFO: Waiting for pod pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a to disappear Nov 16 09:50:05.790: INFO: Pod pod-secrets-3c16025b-2e74-489a-8408-cdb631bd511a no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:50:05.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2697" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2656,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:50:05.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:50:06.302: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:50:08.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 09:50:10.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117006, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:50:13.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when 
timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:50:25.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7533" for this suite. STEP: Destroying namespace "webhook-7533-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.111 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":149,"skipped":2663,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:50:25.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Nov 16 09:50:26.491: INFO: created pod pod-service-account-defaultsa Nov 16 09:50:26.491: INFO: pod pod-service-account-defaultsa service account token volume mount: true Nov 16 09:50:26.629: INFO: created pod pod-service-account-mountsa Nov 16 09:50:26.629: INFO: pod pod-service-account-mountsa service account token volume mount: true Nov 16 09:50:26.633: INFO: created pod pod-service-account-nomountsa Nov 16 09:50:26.633: INFO: pod pod-service-account-nomountsa service account token volume mount: false Nov 16 09:50:26.711: INFO: created pod pod-service-account-defaultsa-mountspec Nov 16 09:50:26.711: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Nov 16 09:50:26.785: INFO: created pod pod-service-account-mountsa-mountspec Nov 16 09:50:26.785: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Nov 16 09:50:26.849: INFO: created pod pod-service-account-nomountsa-mountspec Nov 16 09:50:26.850: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Nov 16 09:50:27.094: INFO: created pod pod-service-account-defaultsa-nomountspec Nov 16 09:50:27.094: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Nov 16 09:50:27.102: INFO: created pod pod-service-account-mountsa-nomountspec Nov 16 
09:50:27.102: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Nov 16 09:50:27.341: INFO: created pod pod-service-account-nomountsa-nomountspec Nov 16 09:50:27.341: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:50:27.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6901" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":150,"skipped":2665,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:50:27.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin 
Nov 16 09:50:27.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6" in namespace "downward-api-2072" to be "Succeeded or Failed" Nov 16 09:50:28.004: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.907466ms Nov 16 09:50:30.009: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047829333s Nov 16 09:50:32.409: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447540717s Nov 16 09:50:34.412: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451214615s Nov 16 09:50:36.635: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.67377258s Nov 16 09:50:39.042: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.081157214s STEP: Saw pod success Nov 16 09:50:39.042: INFO: Pod "downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6" satisfied condition "Succeeded or Failed" Nov 16 09:50:39.046: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6 container client-container: STEP: delete the pod Nov 16 09:50:39.344: INFO: Waiting for pod downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6 to disappear Nov 16 09:50:39.533: INFO: Pod downwardapi-volume-a2a6fbfe-7ba0-4922-988c-03fccc4179c6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:50:39.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2072" for this suite. • [SLOW TEST:12.097 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2667,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:50:39.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8884 STEP: creating service affinity-nodeport-transition in namespace services-8884 STEP: creating replication controller affinity-nodeport-transition in namespace services-8884 I1116 09:50:39.890271 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8884, replica count: 3 I1116 09:50:42.940759 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:50:45.941108 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:50:48.941322 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:50:48.953: INFO: Creating new exec pod Nov 16 09:50:53.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpod-affinity6jbj8 -- /bin/sh -x -c nc -zv -t 
-w 2 affinity-nodeport-transition 80' Nov 16 09:50:54.223: INFO: stderr: "I1116 09:50:54.123619 1904 log.go:181] (0xc00003ba20) (0xc0005b4be0) Create stream\nI1116 09:50:54.123678 1904 log.go:181] (0xc00003ba20) (0xc0005b4be0) Stream added, broadcasting: 1\nI1116 09:50:54.127248 1904 log.go:181] (0xc00003ba20) Reply frame received for 1\nI1116 09:50:54.127280 1904 log.go:181] (0xc00003ba20) (0xc0005b4000) Create stream\nI1116 09:50:54.127288 1904 log.go:181] (0xc00003ba20) (0xc0005b4000) Stream added, broadcasting: 3\nI1116 09:50:54.128123 1904 log.go:181] (0xc00003ba20) Reply frame received for 3\nI1116 09:50:54.128150 1904 log.go:181] (0xc00003ba20) (0xc000c420a0) Create stream\nI1116 09:50:54.128157 1904 log.go:181] (0xc00003ba20) (0xc000c420a0) Stream added, broadcasting: 5\nI1116 09:50:54.129110 1904 log.go:181] (0xc00003ba20) Reply frame received for 5\nI1116 09:50:54.190073 1904 log.go:181] (0xc00003ba20) Data frame received for 5\nI1116 09:50:54.190110 1904 log.go:181] (0xc000c420a0) (5) Data frame handling\nI1116 09:50:54.190128 1904 log.go:181] (0xc000c420a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI1116 09:50:54.213818 1904 log.go:181] (0xc00003ba20) Data frame received for 5\nI1116 09:50:54.213848 1904 log.go:181] (0xc000c420a0) (5) Data frame handling\nI1116 09:50:54.213865 1904 log.go:181] (0xc000c420a0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI1116 09:50:54.214179 1904 log.go:181] (0xc00003ba20) Data frame received for 5\nI1116 09:50:54.214216 1904 log.go:181] (0xc000c420a0) (5) Data frame handling\nI1116 09:50:54.214392 1904 log.go:181] (0xc00003ba20) Data frame received for 3\nI1116 09:50:54.214428 1904 log.go:181] (0xc0005b4000) (3) Data frame handling\nI1116 09:50:54.216008 1904 log.go:181] (0xc00003ba20) Data frame received for 1\nI1116 09:50:54.216027 1904 log.go:181] (0xc0005b4be0) (1) Data frame handling\nI1116 09:50:54.216035 1904 log.go:181] (0xc0005b4be0) 
(1) Data frame sent\nI1116 09:50:54.216229 1904 log.go:181] (0xc00003ba20) (0xc0005b4be0) Stream removed, broadcasting: 1\nI1116 09:50:54.216282 1904 log.go:181] (0xc00003ba20) Go away received\nI1116 09:50:54.216741 1904 log.go:181] (0xc00003ba20) (0xc0005b4be0) Stream removed, broadcasting: 1\nI1116 09:50:54.216764 1904 log.go:181] (0xc00003ba20) (0xc0005b4000) Stream removed, broadcasting: 3\nI1116 09:50:54.216775 1904 log.go:181] (0xc00003ba20) (0xc000c420a0) Stream removed, broadcasting: 5\n" Nov 16 09:50:54.223: INFO: stdout: "" Nov 16 09:50:54.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpod-affinity6jbj8 -- /bin/sh -x -c nc -zv -t -w 2 10.110.30.113 80' Nov 16 09:50:54.449: INFO: stderr: "I1116 09:50:54.376189 1923 log.go:181] (0xc00003ac60) (0xc000e68500) Create stream\nI1116 09:50:54.376273 1923 log.go:181] (0xc00003ac60) (0xc000e68500) Stream added, broadcasting: 1\nI1116 09:50:54.378177 1923 log.go:181] (0xc00003ac60) Reply frame received for 1\nI1116 09:50:54.378225 1923 log.go:181] (0xc00003ac60) (0xc0009248c0) Create stream\nI1116 09:50:54.378236 1923 log.go:181] (0xc00003ac60) (0xc0009248c0) Stream added, broadcasting: 3\nI1116 09:50:54.379118 1923 log.go:181] (0xc00003ac60) Reply frame received for 3\nI1116 09:50:54.379173 1923 log.go:181] (0xc00003ac60) (0xc0009b6140) Create stream\nI1116 09:50:54.379192 1923 log.go:181] (0xc00003ac60) (0xc0009b6140) Stream added, broadcasting: 5\nI1116 09:50:54.380163 1923 log.go:181] (0xc00003ac60) Reply frame received for 5\nI1116 09:50:54.442890 1923 log.go:181] (0xc00003ac60) Data frame received for 3\nI1116 09:50:54.442942 1923 log.go:181] (0xc0009248c0) (3) Data frame handling\nI1116 09:50:54.442969 1923 log.go:181] (0xc00003ac60) Data frame received for 5\nI1116 09:50:54.442978 1923 log.go:181] (0xc0009b6140) (5) Data frame handling\nI1116 09:50:54.442989 1923 log.go:181] (0xc0009b6140) (5) Data frame 
sent\nI1116 09:50:54.442996 1923 log.go:181] (0xc00003ac60) Data frame received for 5\nI1116 09:50:54.443002 1923 log.go:181] (0xc0009b6140) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.30.113 80\nConnection to 10.110.30.113 80 port [tcp/http] succeeded!\nI1116 09:50:54.444169 1923 log.go:181] (0xc00003ac60) Data frame received for 1\nI1116 09:50:54.444188 1923 log.go:181] (0xc000e68500) (1) Data frame handling\nI1116 09:50:54.444199 1923 log.go:181] (0xc000e68500) (1) Data frame sent\nI1116 09:50:54.444216 1923 log.go:181] (0xc00003ac60) (0xc000e68500) Stream removed, broadcasting: 1\nI1116 09:50:54.444234 1923 log.go:181] (0xc00003ac60) Go away received\nI1116 09:50:54.444616 1923 log.go:181] (0xc00003ac60) (0xc000e68500) Stream removed, broadcasting: 1\nI1116 09:50:54.444634 1923 log.go:181] (0xc00003ac60) (0xc0009248c0) Stream removed, broadcasting: 3\nI1116 09:50:54.444642 1923 log.go:181] (0xc00003ac60) (0xc0009b6140) Stream removed, broadcasting: 5\n" Nov 16 09:50:54.450: INFO: stdout: "" Nov 16 09:50:54.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpod-affinity6jbj8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32267' Nov 16 09:50:54.644: INFO: stderr: "I1116 09:50:54.567801 1941 log.go:181] (0xc00012afd0) (0xc000e10960) Create stream\nI1116 09:50:54.567866 1941 log.go:181] (0xc00012afd0) (0xc000e10960) Stream added, broadcasting: 1\nI1116 09:50:54.571848 1941 log.go:181] (0xc00012afd0) Reply frame received for 1\nI1116 09:50:54.571893 1941 log.go:181] (0xc00012afd0) (0xc000ca40a0) Create stream\nI1116 09:50:54.571909 1941 log.go:181] (0xc00012afd0) (0xc000ca40a0) Stream added, broadcasting: 3\nI1116 09:50:54.572819 1941 log.go:181] (0xc00012afd0) Reply frame received for 3\nI1116 09:50:54.572981 1941 log.go:181] (0xc00012afd0) (0xc000e10000) Create stream\nI1116 09:50:54.573005 1941 log.go:181] (0xc00012afd0) (0xc000e10000) Stream added, broadcasting: 
5\n[SPDY stream-frame debug output elided]\n+ nc -zv -t -w 2 172.18.0.15 32267\nConnection to 172.18.0.15 32267 port [tcp/32267] succeeded!\n" Nov 16 09:50:54.644: INFO: stdout: "" Nov 16 09:50:54.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpod-affinity6jbj8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32267' Nov 16 09:50:54.859: INFO: stderr: "[SPDY stream-frame debug output elided]\n+ nc -zv -t -w 2 172.18.0.14 32267\nConnection to 172.18.0.14 32267 port [tcp/32267] succeeded!\n" Nov 16 09:50:54.859: INFO: stdout: "" Nov 16 09:50:54.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpod-affinity6jbj8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:32267/ ; done' Nov 16 09:50:55.244: INFO: stderr: "+ seq 0 15\n[16 iterations of '+ echo' / '+ curl -q -s --connect-timeout 2 http://172.18.0.15:32267/' and SPDY stream-frame debug output elided]\n" Nov 16 09:50:55.245: INFO: stdout: "\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-8dkg7\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-8dkg7\naffinity-nodeport-transition-9r7tl\naffinity-nodeport-transition-8dkg7\naffinity-nodeport-transition-9r7tl\naffinity-nodeport-transition-9r7tl\naffinity-nodeport-transition-9r7tl\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-9r7tl\naffinity-nodeport-transition-8dkg7\naffinity-nodeport-transition-9r7tl\naffinity-nodeport-transition-8dkg7\naffinity-nodeport-transition-9r7tl" Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-8dkg7 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 
09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-8dkg7 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-8dkg7 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-8dkg7 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-8dkg7 Nov 16 09:50:55.245: INFO: Received response from host: affinity-nodeport-transition-9r7tl Nov 16 09:50:55.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-8884 execpod-affinity6jbj8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:32267/ ; done' Nov 16 09:50:55.587: INFO: stderr: "+ seq 0 15\n[16 iterations of '+ echo' / '+ curl -q -s --connect-timeout 2 http://172.18.0.15:32267/' and SPDY stream-frame debug output elided]\n" Nov 16 09:50:55.587: INFO: stdout: 
"\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5\naffinity-nodeport-transition-2t4g5" Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Received response from host: 
affinity-nodeport-transition-2t4g5 Nov 16 09:50:55.587: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8884, will wait for the garbage collector to delete the pods Nov 16 09:50:55.715: INFO: Deleting ReplicationController affinity-nodeport-transition took: 13.360799ms Nov 16 09:50:56.115: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.188839ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:51:05.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8884" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.185 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":152,"skipped":2675,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:51:05.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-630 Nov 16 09:51:09.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 16 09:51:10.177: INFO: stderr: "I1116 09:51:10.077686 2012 log.go:181] (0xc000dc6e70) (0xc000daa820) Create stream\nI1116 09:51:10.077745 2012 log.go:181] (0xc000dc6e70) (0xc000daa820) Stream added, broadcasting: 1\nI1116 09:51:10.082954 2012 log.go:181] (0xc000dc6e70) Reply frame received for 1\nI1116 09:51:10.083000 2012 log.go:181] (0xc000dc6e70) (0xc000daa000) Create stream\nI1116 09:51:10.083016 2012 log.go:181] (0xc000dc6e70) (0xc000daa000) Stream added, broadcasting: 3\nI1116 09:51:10.084143 2012 log.go:181] (0xc000dc6e70) Reply frame received for 3\nI1116 09:51:10.084169 2012 log.go:181] (0xc000dc6e70) (0xc000e10000) Create stream\nI1116 09:51:10.084176 2012 log.go:181] (0xc000dc6e70) (0xc000e10000) Stream added, broadcasting: 5\nI1116 09:51:10.085171 2012 log.go:181] (0xc000dc6e70) Reply frame received for 
5\nI1116 09:51:10.160747 2012 log.go:181] (0xc000dc6e70) Data frame received for 5\nI1116 09:51:10.160780 2012 log.go:181] (0xc000e10000) (5) Data frame handling\nI1116 09:51:10.160801 2012 log.go:181] (0xc000e10000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1116 09:51:10.166266 2012 log.go:181] (0xc000dc6e70) Data frame received for 3\nI1116 09:51:10.166292 2012 log.go:181] (0xc000daa000) (3) Data frame handling\nI1116 09:51:10.166310 2012 log.go:181] (0xc000daa000) (3) Data frame sent\nI1116 09:51:10.166961 2012 log.go:181] (0xc000dc6e70) Data frame received for 3\nI1116 09:51:10.166998 2012 log.go:181] (0xc000daa000) (3) Data frame handling\nI1116 09:51:10.167342 2012 log.go:181] (0xc000dc6e70) Data frame received for 5\nI1116 09:51:10.167359 2012 log.go:181] (0xc000e10000) (5) Data frame handling\nI1116 09:51:10.169382 2012 log.go:181] (0xc000dc6e70) Data frame received for 1\nI1116 09:51:10.169410 2012 log.go:181] (0xc000daa820) (1) Data frame handling\nI1116 09:51:10.169428 2012 log.go:181] (0xc000daa820) (1) Data frame sent\nI1116 09:51:10.169455 2012 log.go:181] (0xc000dc6e70) (0xc000daa820) Stream removed, broadcasting: 1\nI1116 09:51:10.169495 2012 log.go:181] (0xc000dc6e70) Go away received\nI1116 09:51:10.169823 2012 log.go:181] (0xc000dc6e70) (0xc000daa820) Stream removed, broadcasting: 1\nI1116 09:51:10.169837 2012 log.go:181] (0xc000dc6e70) (0xc000daa000) Stream removed, broadcasting: 3\nI1116 09:51:10.169842 2012 log.go:181] (0xc000dc6e70) (0xc000e10000) Stream removed, broadcasting: 5\n" Nov 16 09:51:10.177: INFO: stdout: "iptables" Nov 16 09:51:10.177: INFO: proxyMode: iptables Nov 16 09:51:10.182: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 16 09:51:10.210: INFO: Pod kube-proxy-mode-detector still exists Nov 16 09:51:12.210: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 16 09:51:12.214: INFO: Pod kube-proxy-mode-detector still exists Nov 16 09:51:14.211: INFO: 
Waiting for pod kube-proxy-mode-detector to disappear Nov 16 09:51:14.215: INFO: Pod kube-proxy-mode-detector still exists Nov 16 09:51:16.211: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 16 09:51:16.216: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-630 STEP: creating replication controller affinity-clusterip-timeout in namespace services-630 I1116 09:51:16.294203 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-630, replica count: 3 I1116 09:51:19.344608 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 09:51:22.345013 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 09:51:22.352: INFO: Creating new exec pod Nov 16 09:51:27.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 execpod-affinity2qqvz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Nov 16 09:51:27.605: INFO: stderr: "I1116 09:51:27.503787 2030 log.go:181] (0xc000e3c210) (0xc000af4820) Create stream\nI1116 09:51:27.503876 2030 log.go:181] (0xc000e3c210) (0xc000af4820) Stream added, broadcasting: 1\nI1116 09:51:27.508740 2030 log.go:181] (0xc000e3c210) Reply frame received for 1\nI1116 09:51:27.508813 2030 log.go:181] (0xc000e3c210) (0xc000af4000) Create stream\nI1116 09:51:27.508939 2030 log.go:181] (0xc000e3c210) (0xc000af4000) Stream added, broadcasting: 3\nI1116 09:51:27.510128 2030 log.go:181] (0xc000e3c210) Reply frame received for 3\nI1116 09:51:27.510169 2030 log.go:181] (0xc000e3c210) (0xc0000fa460) Create stream\nI1116 09:51:27.510179 2030 log.go:181] (0xc000e3c210) (0xc0000fa460) Stream added, 
broadcasting: 5\nI1116 09:51:27.511304 2030 log.go:181] (0xc000e3c210) Reply frame received for 5\nI1116 09:51:27.595922 2030 log.go:181] (0xc000e3c210) Data frame received for 5\nI1116 09:51:27.595985 2030 log.go:181] (0xc0000fa460) (5) Data frame handling\nI1116 09:51:27.595998 2030 log.go:181] (0xc0000fa460) (5) Data frame sent\nI1116 09:51:27.596007 2030 log.go:181] (0xc000e3c210) Data frame received for 5\nI1116 09:51:27.596016 2030 log.go:181] (0xc0000fa460) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1116 09:51:27.596054 2030 log.go:181] (0xc000e3c210) Data frame received for 3\nI1116 09:51:27.596102 2030 log.go:181] (0xc000af4000) (3) Data frame handling\nI1116 09:51:27.597533 2030 log.go:181] (0xc000e3c210) Data frame received for 1\nI1116 09:51:27.597563 2030 log.go:181] (0xc000af4820) (1) Data frame handling\nI1116 09:51:27.597586 2030 log.go:181] (0xc000af4820) (1) Data frame sent\nI1116 09:51:27.597604 2030 log.go:181] (0xc000e3c210) (0xc000af4820) Stream removed, broadcasting: 1\nI1116 09:51:27.597620 2030 log.go:181] (0xc000e3c210) Go away received\nI1116 09:51:27.598000 2030 log.go:181] (0xc000e3c210) (0xc000af4820) Stream removed, broadcasting: 1\nI1116 09:51:27.598022 2030 log.go:181] (0xc000e3c210) (0xc000af4000) Stream removed, broadcasting: 3\nI1116 09:51:27.598037 2030 log.go:181] (0xc000e3c210) (0xc0000fa460) Stream removed, broadcasting: 5\n" Nov 16 09:51:27.605: INFO: stdout: "" Nov 16 09:51:27.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 execpod-affinity2qqvz -- /bin/sh -x -c nc -zv -t -w 2 10.111.122.98 80' Nov 16 09:51:27.803: INFO: stderr: "I1116 09:51:27.718361 2048 log.go:181] (0xc000e2adc0) (0xc0003d8a00) Create stream\nI1116 09:51:27.718399 2048 log.go:181] (0xc000e2adc0) (0xc0003d8a00) Stream added, broadcasting: 1\nI1116 09:51:27.722892 
2048 log.go:181] (0xc000e2adc0) Reply frame received for 1\nI1116 09:51:27.722937 2048 log.go:181] (0xc000e2adc0) (0xc0003d83c0) Create stream\nI1116 09:51:27.722949 2048 log.go:181] (0xc000e2adc0) (0xc0003d83c0) Stream added, broadcasting: 3\nI1116 09:51:27.723910 2048 log.go:181] (0xc000e2adc0) Reply frame received for 3\nI1116 09:51:27.723980 2048 log.go:181] (0xc000e2adc0) (0xc000a28140) Create stream\nI1116 09:51:27.724009 2048 log.go:181] (0xc000e2adc0) (0xc000a28140) Stream added, broadcasting: 5\nI1116 09:51:27.725810 2048 log.go:181] (0xc000e2adc0) Reply frame received for 5\nI1116 09:51:27.795256 2048 log.go:181] (0xc000e2adc0) Data frame received for 5\nI1116 09:51:27.795283 2048 log.go:181] (0xc000a28140) (5) Data frame handling\nI1116 09:51:27.795292 2048 log.go:181] (0xc000a28140) (5) Data frame sent\nI1116 09:51:27.795298 2048 log.go:181] (0xc000e2adc0) Data frame received for 5\nI1116 09:51:27.795304 2048 log.go:181] (0xc000a28140) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.122.98 80\nConnection to 10.111.122.98 80 port [tcp/http] succeeded!\nI1116 09:51:27.795315 2048 log.go:181] (0xc000e2adc0) Data frame received for 3\nI1116 09:51:27.795375 2048 log.go:181] (0xc0003d83c0) (3) Data frame handling\nI1116 09:51:27.796805 2048 log.go:181] (0xc000e2adc0) Data frame received for 1\nI1116 09:51:27.796820 2048 log.go:181] (0xc0003d8a00) (1) Data frame handling\nI1116 09:51:27.796829 2048 log.go:181] (0xc0003d8a00) (1) Data frame sent\nI1116 09:51:27.796956 2048 log.go:181] (0xc000e2adc0) (0xc0003d8a00) Stream removed, broadcasting: 1\nI1116 09:51:27.796974 2048 log.go:181] (0xc000e2adc0) Go away received\nI1116 09:51:27.797319 2048 log.go:181] (0xc000e2adc0) (0xc0003d8a00) Stream removed, broadcasting: 1\nI1116 09:51:27.797336 2048 log.go:181] (0xc000e2adc0) (0xc0003d83c0) Stream removed, broadcasting: 3\nI1116 09:51:27.797344 2048 log.go:181] (0xc000e2adc0) (0xc000a28140) Stream removed, broadcasting: 5\n" Nov 16 09:51:27.803: INFO: stdout: "" Nov 
16 09:51:27.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 execpod-affinity2qqvz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.122.98:80/ ; done' Nov 16 09:51:28.121: INFO: stderr: "I1116 09:51:27.939031 2066 log.go:181] (0xc000e80630) (0xc000c18500) Create stream\nI1116 09:51:27.939093 2066 log.go:181] (0xc000e80630) (0xc000c18500) Stream added, broadcasting: 1\nI1116 09:51:27.941225 2066 log.go:181] (0xc000e80630) Reply frame received for 1\nI1116 09:51:27.941253 2066 log.go:181] (0xc000e80630) (0xc0005f8000) Create stream\nI1116 09:51:27.941261 2066 log.go:181] (0xc000e80630) (0xc0005f8000) Stream added, broadcasting: 3\nI1116 09:51:27.942028 2066 log.go:181] (0xc000e80630) Reply frame received for 3\nI1116 09:51:27.942061 2066 log.go:181] (0xc000e80630) (0xc00053e820) Create stream\nI1116 09:51:27.942075 2066 log.go:181] (0xc000e80630) (0xc00053e820) Stream added, broadcasting: 5\nI1116 09:51:27.942738 2066 log.go:181] (0xc000e80630) Reply frame received for 5\nI1116 09:51:28.004186 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.004233 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.004247 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.004273 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.004283 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.004294 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.009104 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.009141 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.009166 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.009924 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 
09:51:28.009961 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.010020 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.010044 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.010074 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.010120 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.016308 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.016329 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.016366 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.016767 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.016789 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.016801 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.016811 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.016820 2066 log.go:181] (0xc00053e820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.016991 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.017014 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.017029 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.017062 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.024000 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.024023 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.024044 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.024826 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.024932 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.024944 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.024978 2066 log.go:181] (0xc000e80630) Data frame 
received for 5\nI1116 09:51:28.025003 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.025027 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.030804 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.030815 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.030821 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.031266 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.031279 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.031287 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\nI1116 09:51:28.031350 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.031373 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.031389 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.031406 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.031421 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.031443 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.037100 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.037120 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.037140 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.037898 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.037920 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.037930 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.037942 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.037949 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.037956 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.111.122.98:80/\nI1116 09:51:28.043527 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.043553 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.043574 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.044417 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.044446 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.044468 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.045600 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.045616 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.045629 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.049813 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.049839 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.049865 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.050222 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.050243 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.050253 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.050265 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.050272 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.050295 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.056497 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.056548 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.056570 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.057348 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.057370 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.057381 2066 
log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.057405 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.057424 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.057434 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.061838 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.061856 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.061866 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.062531 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.062589 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.062602 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.062622 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.062633 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.062643 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.069469 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.069484 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.069494 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.070392 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.070412 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.070424 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.070445 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.070482 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.070500 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.076720 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.076746 
2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.076765 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.077440 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.077487 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.077524 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.077546 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.077558 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.077571 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.083168 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.083191 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.083208 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.084043 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.084061 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.084073 2066 log.go:181] (0xc00053e820) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.084086 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.084132 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.084181 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.091310 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.091328 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.091339 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.092234 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.092256 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.092270 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.092298 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 
09:51:28.092337 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.092382 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.092410 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.092431 2066 log.go:181] (0xc00053e820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.092477 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.097954 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.097984 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.098014 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.098913 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.098943 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.098956 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.098974 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.098984 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.098994 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.099004 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.099015 2066 log.go:181] (0xc00053e820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.099040 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.105360 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.105377 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.105393 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.105807 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.105823 2066 log.go:181] (0xc00053e820) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.105844 2066 log.go:181] (0xc000e80630) Data frame received for 
3\nI1116 09:51:28.105878 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.105985 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.106022 2066 log.go:181] (0xc00053e820) (5) Data frame sent\nI1116 09:51:28.110367 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.110390 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.110415 2066 log.go:181] (0xc0005f8000) (3) Data frame sent\nI1116 09:51:28.111420 2066 log.go:181] (0xc000e80630) Data frame received for 5\nI1116 09:51:28.111468 2066 log.go:181] (0xc00053e820) (5) Data frame handling\nI1116 09:51:28.111507 2066 log.go:181] (0xc000e80630) Data frame received for 3\nI1116 09:51:28.111530 2066 log.go:181] (0xc0005f8000) (3) Data frame handling\nI1116 09:51:28.113838 2066 log.go:181] (0xc000e80630) Data frame received for 1\nI1116 09:51:28.113886 2066 log.go:181] (0xc000c18500) (1) Data frame handling\nI1116 09:51:28.113916 2066 log.go:181] (0xc000c18500) (1) Data frame sent\nI1116 09:51:28.113942 2066 log.go:181] (0xc000e80630) (0xc000c18500) Stream removed, broadcasting: 1\nI1116 09:51:28.113981 2066 log.go:181] (0xc000e80630) Go away received\nI1116 09:51:28.114468 2066 log.go:181] (0xc000e80630) (0xc000c18500) Stream removed, broadcasting: 1\nI1116 09:51:28.114497 2066 log.go:181] (0xc000e80630) (0xc0005f8000) Stream removed, broadcasting: 3\nI1116 09:51:28.114511 2066 log.go:181] (0xc000e80630) (0xc00053e820) Stream removed, broadcasting: 5\n" Nov 16 09:51:28.122: INFO: stdout: 
"\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl\naffinity-clusterip-timeout-jngzl" Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: Received response from host: affinity-clusterip-timeout-jngzl Nov 16 09:51:28.122: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 execpod-affinity2qqvz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.122.98:80/' Nov 16 09:51:28.334: INFO: stderr: "I1116 09:51:28.253803 2084 log.go:181] (0xc0009dcdc0) (0xc0003d4b40) Create stream\nI1116 09:51:28.253861 2084 log.go:181] (0xc0009dcdc0) (0xc0003d4b40) Stream added, broadcasting: 1\nI1116 09:51:28.258962 2084 log.go:181] (0xc0009dcdc0) Reply frame received for 1\nI1116 09:51:28.259004 2084 log.go:181] (0xc0009dcdc0) (0xc00099e280) Create stream\nI1116 09:51:28.259015 2084 log.go:181] (0xc0009dcdc0) (0xc00099e280) Stream added, broadcasting: 3\nI1116 09:51:28.259939 2084 log.go:181] (0xc0009dcdc0) Reply frame received for 3\nI1116 09:51:28.259986 2084 log.go:181] (0xc0009dcdc0) (0xc0003085a0) Create stream\nI1116 09:51:28.260001 2084 log.go:181] (0xc0009dcdc0) (0xc0003085a0) Stream added, broadcasting: 5\nI1116 09:51:28.261176 2084 log.go:181] (0xc0009dcdc0) Reply frame received for 5\nI1116 09:51:28.318636 2084 log.go:181] (0xc0009dcdc0) Data frame received for 5\nI1116 09:51:28.318665 2084 log.go:181] (0xc0003085a0) (5) Data frame handling\nI1116 09:51:28.318684 2084 log.go:181] (0xc0003085a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:28.323454 2084 log.go:181] (0xc0009dcdc0) Data frame received for 3\nI1116 09:51:28.323482 2084 log.go:181] (0xc00099e280) (3) Data frame handling\nI1116 09:51:28.323509 2084 log.go:181] (0xc00099e280) (3) Data frame sent\nI1116 09:51:28.324595 2084 log.go:181] (0xc0009dcdc0) Data frame received for 5\nI1116 09:51:28.324625 2084 log.go:181] (0xc0003085a0) (5) Data frame handling\nI1116 09:51:28.325272 2084 log.go:181] (0xc0009dcdc0) Data frame received for 3\nI1116 09:51:28.325294 2084 log.go:181] (0xc00099e280) (3) Data frame handling\nI1116 09:51:28.326945 2084 log.go:181] (0xc0009dcdc0) Data frame received for 1\nI1116 
09:51:28.326958 2084 log.go:181] (0xc0003d4b40) (1) Data frame handling\nI1116 09:51:28.326972 2084 log.go:181] (0xc0003d4b40) (1) Data frame sent\nI1116 09:51:28.326983 2084 log.go:181] (0xc0009dcdc0) (0xc0003d4b40) Stream removed, broadcasting: 1\nI1116 09:51:28.327249 2084 log.go:181] (0xc0009dcdc0) Go away received\nI1116 09:51:28.327292 2084 log.go:181] (0xc0009dcdc0) (0xc0003d4b40) Stream removed, broadcasting: 1\nI1116 09:51:28.327341 2084 log.go:181] (0xc0009dcdc0) (0xc00099e280) Stream removed, broadcasting: 3\nI1116 09:51:28.327361 2084 log.go:181] (0xc0009dcdc0) (0xc0003085a0) Stream removed, broadcasting: 5\n" Nov 16 09:51:28.334: INFO: stdout: "affinity-clusterip-timeout-jngzl" Nov 16 09:51:43.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 execpod-affinity2qqvz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.122.98:80/' Nov 16 09:51:43.563: INFO: stderr: "I1116 09:51:43.473494 2102 log.go:181] (0xc000db4f20) (0xc00040f7c0) Create stream\nI1116 09:51:43.473573 2102 log.go:181] (0xc000db4f20) (0xc00040f7c0) Stream added, broadcasting: 1\nI1116 09:51:43.478089 2102 log.go:181] (0xc000db4f20) Reply frame received for 1\nI1116 09:51:43.478130 2102 log.go:181] (0xc000db4f20) (0xc000308000) Create stream\nI1116 09:51:43.478140 2102 log.go:181] (0xc000db4f20) (0xc000308000) Stream added, broadcasting: 3\nI1116 09:51:43.478934 2102 log.go:181] (0xc000db4f20) Reply frame received for 3\nI1116 09:51:43.478961 2102 log.go:181] (0xc000db4f20) (0xc000c140a0) Create stream\nI1116 09:51:43.478969 2102 log.go:181] (0xc000db4f20) (0xc000c140a0) Stream added, broadcasting: 5\nI1116 09:51:43.479959 2102 log.go:181] (0xc000db4f20) Reply frame received for 5\nI1116 09:51:43.545435 2102 log.go:181] (0xc000db4f20) Data frame received for 5\nI1116 09:51:43.545473 2102 log.go:181] (0xc000c140a0) (5) Data frame handling\nI1116 09:51:43.545502 2102 log.go:181] 
(0xc000c140a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:43.551453 2102 log.go:181] (0xc000db4f20) Data frame received for 3\nI1116 09:51:43.551472 2102 log.go:181] (0xc000308000) (3) Data frame handling\nI1116 09:51:43.551498 2102 log.go:181] (0xc000308000) (3) Data frame sent\nI1116 09:51:43.551862 2102 log.go:181] (0xc000db4f20) Data frame received for 3\nI1116 09:51:43.551903 2102 log.go:181] (0xc000308000) (3) Data frame handling\nI1116 09:51:43.552122 2102 log.go:181] (0xc000db4f20) Data frame received for 5\nI1116 09:51:43.552157 2102 log.go:181] (0xc000c140a0) (5) Data frame handling\nI1116 09:51:43.557692 2102 log.go:181] (0xc000db4f20) Data frame received for 1\nI1116 09:51:43.557708 2102 log.go:181] (0xc00040f7c0) (1) Data frame handling\nI1116 09:51:43.557714 2102 log.go:181] (0xc00040f7c0) (1) Data frame sent\nI1116 09:51:43.557727 2102 log.go:181] (0xc000db4f20) (0xc00040f7c0) Stream removed, broadcasting: 1\nI1116 09:51:43.557757 2102 log.go:181] (0xc000db4f20) Go away received\nI1116 09:51:43.558018 2102 log.go:181] (0xc000db4f20) (0xc00040f7c0) Stream removed, broadcasting: 1\nI1116 09:51:43.558032 2102 log.go:181] (0xc000db4f20) (0xc000308000) Stream removed, broadcasting: 3\nI1116 09:51:43.558037 2102 log.go:181] (0xc000db4f20) (0xc000c140a0) Stream removed, broadcasting: 5\n" Nov 16 09:51:43.563: INFO: stdout: "affinity-clusterip-timeout-jngzl" Nov 16 09:51:58.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-630 execpod-affinity2qqvz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.122.98:80/' Nov 16 09:51:58.771: INFO: stderr: "I1116 09:51:58.706740 2120 log.go:181] (0xc00003a160) (0xc00098e1e0) Create stream\nI1116 09:51:58.706801 2120 log.go:181] (0xc00003a160) (0xc00098e1e0) Stream added, broadcasting: 1\nI1116 09:51:58.709587 2120 log.go:181] (0xc00003a160) Reply frame received for 1\nI1116 
09:51:58.709643 2120 log.go:181] (0xc00003a160) (0xc000a72960) Create stream\nI1116 09:51:58.709662 2120 log.go:181] (0xc00003a160) (0xc000a72960) Stream added, broadcasting: 3\nI1116 09:51:58.710736 2120 log.go:181] (0xc00003a160) Reply frame received for 3\nI1116 09:51:58.710752 2120 log.go:181] (0xc00003a160) (0xc000a72e60) Create stream\nI1116 09:51:58.710760 2120 log.go:181] (0xc00003a160) (0xc000a72e60) Stream added, broadcasting: 5\nI1116 09:51:58.711644 2120 log.go:181] (0xc00003a160) Reply frame received for 5\nI1116 09:51:58.760328 2120 log.go:181] (0xc00003a160) Data frame received for 5\nI1116 09:51:58.760359 2120 log.go:181] (0xc000a72e60) (5) Data frame handling\nI1116 09:51:58.760378 2120 log.go:181] (0xc000a72e60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.122.98:80/\nI1116 09:51:58.763040 2120 log.go:181] (0xc00003a160) Data frame received for 3\nI1116 09:51:58.763073 2120 log.go:181] (0xc000a72960) (3) Data frame handling\nI1116 09:51:58.763103 2120 log.go:181] (0xc000a72960) (3) Data frame sent\nI1116 09:51:58.763374 2120 log.go:181] (0xc00003a160) Data frame received for 3\nI1116 09:51:58.763406 2120 log.go:181] (0xc000a72960) (3) Data frame handling\nI1116 09:51:58.763624 2120 log.go:181] (0xc00003a160) Data frame received for 5\nI1116 09:51:58.763643 2120 log.go:181] (0xc000a72e60) (5) Data frame handling\nI1116 09:51:58.765010 2120 log.go:181] (0xc00003a160) Data frame received for 1\nI1116 09:51:58.765032 2120 log.go:181] (0xc00098e1e0) (1) Data frame handling\nI1116 09:51:58.765047 2120 log.go:181] (0xc00098e1e0) (1) Data frame sent\nI1116 09:51:58.765063 2120 log.go:181] (0xc00003a160) (0xc00098e1e0) Stream removed, broadcasting: 1\nI1116 09:51:58.765076 2120 log.go:181] (0xc00003a160) Go away received\nI1116 09:51:58.765441 2120 log.go:181] (0xc00003a160) (0xc00098e1e0) Stream removed, broadcasting: 1\nI1116 09:51:58.765459 2120 log.go:181] (0xc00003a160) (0xc000a72960) Stream removed, broadcasting: 3\nI1116 
09:51:58.765467 2120 log.go:181] (0xc00003a160) (0xc000a72e60) Stream removed, broadcasting: 5\n" Nov 16 09:51:58.771: INFO: stdout: "affinity-clusterip-timeout-cj695" Nov 16 09:51:58.771: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-630, will wait for the garbage collector to delete the pods Nov 16 09:51:59.483: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 68.855843ms Nov 16 09:51:59.983: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.215825ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:52:16.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-630" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:71.022 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":153,"skipped":2750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:52:16.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:52:28.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7215" for this suite. • [SLOW TEST:11.173 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":154,"skipped":2787,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:52:28.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Nov 16 09:52:28.145: INFO: Waiting up to 1m0s for all nodes to be ready Nov 16 09:53:28.170: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Nov 16 09:53:28.239: INFO: Created pod: pod0-sched-preemption-low-priority Nov 16 09:53:28.266: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:53:48.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6037" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:80.476 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":155,"skipped":2791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:53:48.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:53:50.277: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:53:52.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117230, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117230, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117230, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117230, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:53:55.336: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 
09:53:55.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2684-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:53:56.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8926" for this suite. STEP: Destroying namespace "webhook-8926-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.086 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":156,"skipped":2820,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:53:56.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Nov 16 09:54:02.773: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9426 PodName:pod-sharedvolume-d86806e8-6607-46e5-8809-343766eb7704 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 09:54:02.773: INFO: >>> kubeConfig: /root/.kube/config I1116 09:54:02.815468 7 log.go:181] (0xc00003b340) (0xc004f535e0) Create stream I1116 09:54:02.815501 7 log.go:181] (0xc00003b340) (0xc004f535e0) Stream added, broadcasting: 1 I1116 09:54:02.817856 7 log.go:181] (0xc00003b340) Reply frame received for 1 I1116 09:54:02.817897 7 log.go:181] (0xc00003b340) (0xc000f281e0) Create stream I1116 09:54:02.817912 7 log.go:181] (0xc00003b340) (0xc000f281e0) Stream added, broadcasting: 3 I1116 09:54:02.818874 7 log.go:181] (0xc00003b340) Reply frame received for 3 I1116 09:54:02.818932 7 log.go:181] (0xc00003b340) (0xc0024dbea0) Create stream I1116 09:54:02.818961 7 log.go:181] (0xc00003b340) (0xc0024dbea0) Stream added, broadcasting: 5 I1116 09:54:02.820011 7 log.go:181] (0xc00003b340) Reply frame received for 5 I1116 09:54:02.910102 7 log.go:181] (0xc00003b340) Data frame received for 5 I1116 09:54:02.910156 7 log.go:181] (0xc0024dbea0) (5) Data frame 
handling I1116 09:54:02.910198 7 log.go:181] (0xc00003b340) Data frame received for 3 I1116 09:54:02.910225 7 log.go:181] (0xc000f281e0) (3) Data frame handling I1116 09:54:02.910256 7 log.go:181] (0xc000f281e0) (3) Data frame sent I1116 09:54:02.910280 7 log.go:181] (0xc00003b340) Data frame received for 3 I1116 09:54:02.910295 7 log.go:181] (0xc000f281e0) (3) Data frame handling I1116 09:54:02.911877 7 log.go:181] (0xc00003b340) Data frame received for 1 I1116 09:54:02.911909 7 log.go:181] (0xc004f535e0) (1) Data frame handling I1116 09:54:02.911938 7 log.go:181] (0xc004f535e0) (1) Data frame sent I1116 09:54:02.911968 7 log.go:181] (0xc00003b340) (0xc004f535e0) Stream removed, broadcasting: 1 I1116 09:54:02.911995 7 log.go:181] (0xc00003b340) Go away received I1116 09:54:02.912100 7 log.go:181] (0xc00003b340) (0xc004f535e0) Stream removed, broadcasting: 1 I1116 09:54:02.912120 7 log.go:181] (0xc00003b340) (0xc000f281e0) Stream removed, broadcasting: 3 I1116 09:54:02.912133 7 log.go:181] (0xc00003b340) (0xc0024dbea0) Stream removed, broadcasting: 5 Nov 16 09:54:02.912: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:54:02.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9426" for this suite. 
• [SLOW TEST:6.326 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":157,"skipped":2834,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:54:02.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-7ngt STEP: Creating a pod to test atomic-volume-subpath Nov 16 09:54:03.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7ngt" in namespace "subpath-7842" to be 
"Succeeded or Failed" Nov 16 09:54:03.043: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298334ms Nov 16 09:54:05.048: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008958498s Nov 16 09:54:07.053: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 4.014350538s Nov 16 09:54:09.058: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 6.019371467s Nov 16 09:54:11.062: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 8.023236449s Nov 16 09:54:13.067: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 10.028332714s Nov 16 09:54:15.072: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 12.0332583s Nov 16 09:54:17.077: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 14.038107987s Nov 16 09:54:19.081: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 16.042766385s Nov 16 09:54:21.086: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 18.04751589s Nov 16 09:54:23.091: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 20.052646033s Nov 16 09:54:25.095: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Running", Reason="", readiness=true. Elapsed: 22.056664884s Nov 16 09:54:27.100: INFO: Pod "pod-subpath-test-downwardapi-7ngt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061415468s STEP: Saw pod success Nov 16 09:54:27.100: INFO: Pod "pod-subpath-test-downwardapi-7ngt" satisfied condition "Succeeded or Failed" Nov 16 09:54:27.103: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-7ngt container test-container-subpath-downwardapi-7ngt: STEP: delete the pod Nov 16 09:54:27.175: INFO: Waiting for pod pod-subpath-test-downwardapi-7ngt to disappear Nov 16 09:54:27.181: INFO: Pod pod-subpath-test-downwardapi-7ngt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7ngt Nov 16 09:54:27.181: INFO: Deleting pod "pod-subpath-test-downwardapi-7ngt" in namespace "subpath-7842" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:54:27.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7842" for this suite. • [SLOW TEST:24.267 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":158,"skipped":2844,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:54:27.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 09:54:27.829: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 09:54:29.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 09:54:31.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117267, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 09:54:34.870: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:54:35.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-6411" for this suite. STEP: Destroying namespace "webhook-6411-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.361 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":159,"skipped":2850,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:54:35.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common 
issue like exceeded quota [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 09:54:35.617: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Nov 16 09:54:37.683: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 09:54:38.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-750" for this suite. 
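The quota-versus-ReplicationController interaction exercised above can be reproduced by hand. The sketch below mirrors the log's "condition-test" objects: a ResourceQuota capping the namespace at two pods and an RC asking for three, which should surface a ReplicaFailure condition until the RC is scaled back within quota. The container name and image are illustrative assumptions, not taken from the log.

```yaml
# Hypothetical manifests mirroring the "condition-test" objects in the log above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"            # the namespace may run only two pods
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3            # exceeds the pod quota, so the RC surfaces a
                         # ReplicaFailure condition in status.conditions
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: webserver        # illustrative name/image, not from the log
        image: httpd:2.4.38-alpine
```

Scaling the RC down to the quota (e.g. `kubectl scale rc condition-test --replicas=2`) should clear the failure condition, matching the log's "has no failure condition set" step.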
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":160,"skipped":2866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 09:54:39.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6393 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-6393 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6393 Nov 16 09:54:40.276: INFO: Found 0 stateful pods, waiting for 1 Nov 16 09:54:50.281: INFO: Waiting for pod ss-0 to enter Running - Ready=true, 
currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Nov 16 09:54:50.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:54:50.537: INFO: stderr: "I1116 09:54:50.412671 2139 log.go:181] (0xc000d866e0) (0xc0002e6320) Create stream\nI1116 09:54:50.412756 2139 log.go:181] (0xc000d866e0) (0xc0002e6320) Stream added, broadcasting: 1\nI1116 09:54:50.416357 2139 log.go:181] (0xc000d866e0) Reply frame received for 1\nI1116 09:54:50.416402 2139 log.go:181] (0xc000d866e0) (0xc0006aa000) Create stream\nI1116 09:54:50.416417 2139 log.go:181] (0xc000d866e0) (0xc0006aa000) Stream added, broadcasting: 3\nI1116 09:54:50.417264 2139 log.go:181] (0xc000d866e0) Reply frame received for 3\nI1116 09:54:50.417295 2139 log.go:181] (0xc000d866e0) (0xc0002e6000) Create stream\nI1116 09:54:50.417305 2139 log.go:181] (0xc000d866e0) (0xc0002e6000) Stream added, broadcasting: 5\nI1116 09:54:50.418074 2139 log.go:181] (0xc000d866e0) Reply frame received for 5\nI1116 09:54:50.494013 2139 log.go:181] (0xc000d866e0) Data frame received for 5\nI1116 09:54:50.494065 2139 log.go:181] (0xc0002e6000) (5) Data frame handling\nI1116 09:54:50.494103 2139 log.go:181] (0xc0002e6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:54:50.526620 2139 log.go:181] (0xc000d866e0) Data frame received for 3\nI1116 09:54:50.526654 2139 log.go:181] (0xc0006aa000) (3) Data frame handling\nI1116 09:54:50.526674 2139 log.go:181] (0xc0006aa000) (3) Data frame sent\nI1116 09:54:50.526701 2139 log.go:181] (0xc000d866e0) Data frame received for 5\nI1116 09:54:50.526728 2139 log.go:181] (0xc0002e6000) (5) Data frame handling\nI1116 09:54:50.527204 2139 log.go:181] (0xc000d866e0) Data frame received for 3\nI1116 09:54:50.527229 2139 log.go:181] 
(0xc0006aa000) (3) Data frame handling\nI1116 09:54:50.529028 2139 log.go:181] (0xc000d866e0) Data frame received for 1\nI1116 09:54:50.529041 2139 log.go:181] (0xc0002e6320) (1) Data frame handling\nI1116 09:54:50.529047 2139 log.go:181] (0xc0002e6320) (1) Data frame sent\nI1116 09:54:50.529244 2139 log.go:181] (0xc000d866e0) (0xc0002e6320) Stream removed, broadcasting: 1\nI1116 09:54:50.529341 2139 log.go:181] (0xc000d866e0) Go away received\nI1116 09:54:50.529691 2139 log.go:181] (0xc000d866e0) (0xc0002e6320) Stream removed, broadcasting: 1\nI1116 09:54:50.529718 2139 log.go:181] (0xc000d866e0) (0xc0006aa000) Stream removed, broadcasting: 3\nI1116 09:54:50.529737 2139 log.go:181] (0xc000d866e0) (0xc0002e6000) Stream removed, broadcasting: 5\n" Nov 16 09:54:50.537: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:54:50.537: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:54:50.541: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 16 09:55:00.549: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:55:00.549: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:55:00.563: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:00.563: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:00.564: INFO: Nov 16 09:55:00.564: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 16 
09:55:01.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99474643s Nov 16 09:55:02.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989717791s Nov 16 09:55:03.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985006479s Nov 16 09:55:04.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979206864s Nov 16 09:55:05.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973716794s Nov 16 09:55:06.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967743699s Nov 16 09:55:07.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962976285s Nov 16 09:55:08.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957616071s Nov 16 09:55:09.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.745781ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6393 Nov 16 09:55:10.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:55:10.847: INFO: stderr: "I1116 09:55:10.765299 2157 log.go:181] (0xc000e1afd0) (0xc000a28820) Create stream\nI1116 09:55:10.765362 2157 log.go:181] (0xc000e1afd0) (0xc000a28820) Stream added, broadcasting: 1\nI1116 09:55:10.775845 2157 log.go:181] (0xc000e1afd0) Reply frame received for 1\nI1116 09:55:10.775896 2157 log.go:181] (0xc000e1afd0) (0xc0006a0280) Create stream\nI1116 09:55:10.775910 2157 log.go:181] (0xc000e1afd0) (0xc0006a0280) Stream added, broadcasting: 3\nI1116 09:55:10.776746 2157 log.go:181] (0xc000e1afd0) Reply frame received for 3\nI1116 09:55:10.776806 2157 log.go:181] (0xc000e1afd0) (0xc000a28000) Create stream\nI1116 09:55:10.776822 2157 log.go:181] (0xc000e1afd0) (0xc000a28000) Stream added, broadcasting: 5\nI1116 
09:55:10.777756 2157 log.go:181] (0xc000e1afd0) Reply frame received for 5\nI1116 09:55:10.836340 2157 log.go:181] (0xc000e1afd0) Data frame received for 3\nI1116 09:55:10.836401 2157 log.go:181] (0xc0006a0280) (3) Data frame handling\nI1116 09:55:10.836421 2157 log.go:181] (0xc0006a0280) (3) Data frame sent\nI1116 09:55:10.836451 2157 log.go:181] (0xc000e1afd0) Data frame received for 3\nI1116 09:55:10.836489 2157 log.go:181] (0xc000e1afd0) Data frame received for 5\nI1116 09:55:10.836538 2157 log.go:181] (0xc000a28000) (5) Data frame handling\nI1116 09:55:10.836563 2157 log.go:181] (0xc000a28000) (5) Data frame sent\nI1116 09:55:10.836574 2157 log.go:181] (0xc000e1afd0) Data frame received for 5\nI1116 09:55:10.836583 2157 log.go:181] (0xc000a28000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1116 09:55:10.836625 2157 log.go:181] (0xc0006a0280) (3) Data frame handling\nI1116 09:55:10.838446 2157 log.go:181] (0xc000e1afd0) Data frame received for 1\nI1116 09:55:10.838477 2157 log.go:181] (0xc000a28820) (1) Data frame handling\nI1116 09:55:10.838495 2157 log.go:181] (0xc000a28820) (1) Data frame sent\nI1116 09:55:10.838527 2157 log.go:181] (0xc000e1afd0) (0xc000a28820) Stream removed, broadcasting: 1\nI1116 09:55:10.838556 2157 log.go:181] (0xc000e1afd0) Go away received\nI1116 09:55:10.839039 2157 log.go:181] (0xc000e1afd0) (0xc000a28820) Stream removed, broadcasting: 1\nI1116 09:55:10.839060 2157 log.go:181] (0xc000e1afd0) (0xc0006a0280) Stream removed, broadcasting: 3\nI1116 09:55:10.839071 2157 log.go:181] (0xc000e1afd0) (0xc000a28000) Stream removed, broadcasting: 5\n" Nov 16 09:55:10.847: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:55:10.847: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:55:10.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:55:11.073: INFO: stderr: "I1116 09:55:10.985916 2176 log.go:181] (0xc000d834a0) (0xc0005fe820) Create stream\nI1116 09:55:10.985981 2176 log.go:181] (0xc000d834a0) (0xc0005fe820) Stream added, broadcasting: 1\nI1116 09:55:10.992431 2176 log.go:181] (0xc000d834a0) Reply frame received for 1\nI1116 09:55:10.992467 2176 log.go:181] (0xc000d834a0) (0xc0005fe000) Create stream\nI1116 09:55:10.992477 2176 log.go:181] (0xc000d834a0) (0xc0005fe000) Stream added, broadcasting: 3\nI1116 09:55:10.993398 2176 log.go:181] (0xc000d834a0) Reply frame received for 3\nI1116 09:55:10.993430 2176 log.go:181] (0xc000d834a0) (0xc0008d0a00) Create stream\nI1116 09:55:10.993440 2176 log.go:181] (0xc000d834a0) (0xc0008d0a00) Stream added, broadcasting: 5\nI1116 09:55:10.994236 2176 log.go:181] (0xc000d834a0) Reply frame received for 5\nI1116 09:55:11.064667 2176 log.go:181] (0xc000d834a0) Data frame received for 5\nI1116 09:55:11.064693 2176 log.go:181] (0xc0008d0a00) (5) Data frame handling\nI1116 09:55:11.064708 2176 log.go:181] (0xc0008d0a00) (5) Data frame sent\nI1116 09:55:11.064713 2176 log.go:181] (0xc000d834a0) Data frame received for 5\nI1116 09:55:11.064717 2176 log.go:181] (0xc0008d0a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1116 09:55:11.064743 2176 log.go:181] (0xc000d834a0) Data frame received for 3\nI1116 09:55:11.064754 2176 log.go:181] (0xc0005fe000) (3) Data frame handling\nI1116 09:55:11.064764 2176 log.go:181] (0xc0005fe000) (3) Data frame sent\nI1116 09:55:11.064775 2176 log.go:181] (0xc000d834a0) Data frame received for 3\nI1116 09:55:11.064783 2176 log.go:181] (0xc0005fe000) (3) Data frame handling\nI1116 09:55:11.066785 2176 log.go:181] (0xc000d834a0) Data frame received for 1\nI1116 09:55:11.066795 
2176 log.go:181] (0xc0005fe820) (1) Data frame handling\nI1116 09:55:11.066801 2176 log.go:181] (0xc0005fe820) (1) Data frame sent\nI1116 09:55:11.066807 2176 log.go:181] (0xc000d834a0) (0xc0005fe820) Stream removed, broadcasting: 1\nI1116 09:55:11.067042 2176 log.go:181] (0xc000d834a0) Go away received\nI1116 09:55:11.067097 2176 log.go:181] (0xc000d834a0) (0xc0005fe820) Stream removed, broadcasting: 1\nI1116 09:55:11.067138 2176 log.go:181] (0xc000d834a0) (0xc0005fe000) Stream removed, broadcasting: 3\nI1116 09:55:11.067159 2176 log.go:181] (0xc000d834a0) (0xc0008d0a00) Stream removed, broadcasting: 5\n" Nov 16 09:55:11.074: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:55:11.074: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:55:11.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:55:11.294: INFO: stderr: "I1116 09:55:11.210964 2194 log.go:181] (0xc000e9b290) (0xc000e92960) Create stream\nI1116 09:55:11.211023 2194 log.go:181] (0xc000e9b290) (0xc000e92960) Stream added, broadcasting: 1\nI1116 09:55:11.216527 2194 log.go:181] (0xc000e9b290) Reply frame received for 1\nI1116 09:55:11.216567 2194 log.go:181] (0xc000e9b290) (0xc000e92000) Create stream\nI1116 09:55:11.216578 2194 log.go:181] (0xc000e9b290) (0xc000e92000) Stream added, broadcasting: 3\nI1116 09:55:11.217542 2194 log.go:181] (0xc000e9b290) Reply frame received for 3\nI1116 09:55:11.217597 2194 log.go:181] (0xc000e9b290) (0xc0003c4e60) Create stream\nI1116 09:55:11.217615 2194 log.go:181] (0xc000e9b290) (0xc0003c4e60) Stream added, broadcasting: 5\nI1116 09:55:11.218351 2194 log.go:181] (0xc000e9b290) Reply frame received for 5\nI1116 09:55:11.287989 2194 log.go:181] 
(0xc000e9b290) Data frame received for 5\nI1116 09:55:11.288025 2194 log.go:181] (0xc0003c4e60) (5) Data frame handling\nI1116 09:55:11.288043 2194 log.go:181] (0xc0003c4e60) (5) Data frame sent\nI1116 09:55:11.288076 2194 log.go:181] (0xc000e9b290) Data frame received for 5\nI1116 09:55:11.288088 2194 log.go:181] (0xc0003c4e60) (5) Data frame handling\nI1116 09:55:11.288096 2194 log.go:181] (0xc000e9b290) Data frame received for 3\nI1116 09:55:11.288100 2194 log.go:181] (0xc000e92000) (3) Data frame handling\nI1116 09:55:11.288105 2194 log.go:181] (0xc000e92000) (3) Data frame sent\nI1116 09:55:11.288109 2194 log.go:181] (0xc000e9b290) Data frame received for 3\nI1116 09:55:11.288112 2194 log.go:181] (0xc000e92000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1116 09:55:11.289808 2194 log.go:181] (0xc000e9b290) Data frame received for 1\nI1116 09:55:11.289849 2194 log.go:181] (0xc000e92960) (1) Data frame handling\nI1116 09:55:11.289881 2194 log.go:181] (0xc000e92960) (1) Data frame sent\nI1116 09:55:11.289902 2194 log.go:181] (0xc000e9b290) (0xc000e92960) Stream removed, broadcasting: 1\nI1116 09:55:11.289929 2194 log.go:181] (0xc000e9b290) Go away received\nI1116 09:55:11.290278 2194 log.go:181] (0xc000e9b290) (0xc000e92960) Stream removed, broadcasting: 1\nI1116 09:55:11.290294 2194 log.go:181] (0xc000e9b290) (0xc000e92000) Stream removed, broadcasting: 3\nI1116 09:55:11.290301 2194 log.go:181] (0xc000e9b290) (0xc0003c4e60) Stream removed, broadcasting: 5\n" Nov 16 09:55:11.294: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 16 09:55:11.294: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 16 09:55:11.299: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:55:11.299: INFO: Waiting 
for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 09:55:11.299: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Nov 16 09:55:11.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:55:11.519: INFO: stderr: "I1116 09:55:11.432104 2212 log.go:181] (0xc000142370) (0xc000abc460) Create stream\nI1116 09:55:11.432174 2212 log.go:181] (0xc000142370) (0xc000abc460) Stream added, broadcasting: 1\nI1116 09:55:11.434114 2212 log.go:181] (0xc000142370) Reply frame received for 1\nI1116 09:55:11.434160 2212 log.go:181] (0xc000142370) (0xc000392280) Create stream\nI1116 09:55:11.434182 2212 log.go:181] (0xc000142370) (0xc000392280) Stream added, broadcasting: 3\nI1116 09:55:11.434969 2212 log.go:181] (0xc000142370) Reply frame received for 3\nI1116 09:55:11.435000 2212 log.go:181] (0xc000142370) (0xc000abc960) Create stream\nI1116 09:55:11.435011 2212 log.go:181] (0xc000142370) (0xc000abc960) Stream added, broadcasting: 5\nI1116 09:55:11.435854 2212 log.go:181] (0xc000142370) Reply frame received for 5\nI1116 09:55:11.509834 2212 log.go:181] (0xc000142370) Data frame received for 5\nI1116 09:55:11.509869 2212 log.go:181] (0xc000abc960) (5) Data frame handling\nI1116 09:55:11.509887 2212 log.go:181] (0xc000abc960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:55:11.509926 2212 log.go:181] (0xc000142370) Data frame received for 3\nI1116 09:55:11.509986 2212 log.go:181] (0xc000392280) (3) Data frame handling\nI1116 09:55:11.510022 2212 log.go:181] (0xc000392280) (3) Data frame sent\nI1116 09:55:11.510047 2212 log.go:181] (0xc000142370) Data frame received for 3\nI1116 09:55:11.510072 2212 log.go:181] (0xc000392280) (3) Data frame 
handling\nI1116 09:55:11.510108 2212 log.go:181] (0xc000142370) Data frame received for 5\nI1116 09:55:11.510131 2212 log.go:181] (0xc000abc960) (5) Data frame handling\nI1116 09:55:11.511332 2212 log.go:181] (0xc000142370) Data frame received for 1\nI1116 09:55:11.511373 2212 log.go:181] (0xc000abc460) (1) Data frame handling\nI1116 09:55:11.511404 2212 log.go:181] (0xc000abc460) (1) Data frame sent\nI1116 09:55:11.511427 2212 log.go:181] (0xc000142370) (0xc000abc460) Stream removed, broadcasting: 1\nI1116 09:55:11.511456 2212 log.go:181] (0xc000142370) Go away received\nI1116 09:55:11.511907 2212 log.go:181] (0xc000142370) (0xc000abc460) Stream removed, broadcasting: 1\nI1116 09:55:11.511933 2212 log.go:181] (0xc000142370) (0xc000392280) Stream removed, broadcasting: 3\nI1116 09:55:11.511946 2212 log.go:181] (0xc000142370) (0xc000abc960) Stream removed, broadcasting: 5\n" Nov 16 09:55:11.520: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:55:11.520: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:55:11.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:55:11.751: INFO: stderr: "I1116 09:55:11.651757 2230 log.go:181] (0xc0008614a0) (0xc00072c820) Create stream\nI1116 09:55:11.651822 2230 log.go:181] (0xc0008614a0) (0xc00072c820) Stream added, broadcasting: 1\nI1116 09:55:11.654576 2230 log.go:181] (0xc0008614a0) Reply frame received for 1\nI1116 09:55:11.654712 2230 log.go:181] (0xc0008614a0) (0xc00057a3c0) Create stream\nI1116 09:55:11.654738 2230 log.go:181] (0xc0008614a0) (0xc00057a3c0) Stream added, broadcasting: 3\nI1116 09:55:11.655899 2230 log.go:181] (0xc0008614a0) Reply frame received for 3\nI1116 09:55:11.656036 2230 log.go:181] 
(0xc0008614a0) (0xc0007ac140) Create stream\nI1116 09:55:11.656067 2230 log.go:181] (0xc0008614a0) (0xc0007ac140) Stream added, broadcasting: 5\nI1116 09:55:11.657193 2230 log.go:181] (0xc0008614a0) Reply frame received for 5\nI1116 09:55:11.716641 2230 log.go:181] (0xc0008614a0) Data frame received for 5\nI1116 09:55:11.716665 2230 log.go:181] (0xc0007ac140) (5) Data frame handling\nI1116 09:55:11.716680 2230 log.go:181] (0xc0007ac140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:55:11.741856 2230 log.go:181] (0xc0008614a0) Data frame received for 5\nI1116 09:55:11.741889 2230 log.go:181] (0xc0007ac140) (5) Data frame handling\nI1116 09:55:11.741911 2230 log.go:181] (0xc0008614a0) Data frame received for 3\nI1116 09:55:11.741928 2230 log.go:181] (0xc00057a3c0) (3) Data frame handling\nI1116 09:55:11.741950 2230 log.go:181] (0xc00057a3c0) (3) Data frame sent\nI1116 09:55:11.741965 2230 log.go:181] (0xc0008614a0) Data frame received for 3\nI1116 09:55:11.741973 2230 log.go:181] (0xc00057a3c0) (3) Data frame handling\nI1116 09:55:11.743503 2230 log.go:181] (0xc0008614a0) Data frame received for 1\nI1116 09:55:11.743517 2230 log.go:181] (0xc00072c820) (1) Data frame handling\nI1116 09:55:11.743525 2230 log.go:181] (0xc00072c820) (1) Data frame sent\nI1116 09:55:11.743535 2230 log.go:181] (0xc0008614a0) (0xc00072c820) Stream removed, broadcasting: 1\nI1116 09:55:11.743547 2230 log.go:181] (0xc0008614a0) Go away received\nI1116 09:55:11.743890 2230 log.go:181] (0xc0008614a0) (0xc00072c820) Stream removed, broadcasting: 1\nI1116 09:55:11.743911 2230 log.go:181] (0xc0008614a0) (0xc00057a3c0) Stream removed, broadcasting: 3\nI1116 09:55:11.743922 2230 log.go:181] (0xc0008614a0) (0xc0007ac140) Stream removed, broadcasting: 5\n" Nov 16 09:55:11.751: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:55:11.751: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:55:11.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 16 09:55:12.005: INFO: stderr: "I1116 09:55:11.884981 2248 log.go:181] (0xc000e1f130) (0xc000e9a820) Create stream\nI1116 09:55:11.885055 2248 log.go:181] (0xc000e1f130) (0xc000e9a820) Stream added, broadcasting: 1\nI1116 09:55:11.889632 2248 log.go:181] (0xc000e1f130) Reply frame received for 1\nI1116 09:55:11.889666 2248 log.go:181] (0xc000e1f130) (0xc0007190e0) Create stream\nI1116 09:55:11.889675 2248 log.go:181] (0xc000e1f130) (0xc0007190e0) Stream added, broadcasting: 3\nI1116 09:55:11.890532 2248 log.go:181] (0xc000e1f130) Reply frame received for 3\nI1116 09:55:11.890559 2248 log.go:181] (0xc000e1f130) (0xc000d0e000) Create stream\nI1116 09:55:11.890566 2248 log.go:181] (0xc000e1f130) (0xc000d0e000) Stream added, broadcasting: 5\nI1116 09:55:11.891283 2248 log.go:181] (0xc000e1f130) Reply frame received for 5\nI1116 09:55:11.952057 2248 log.go:181] (0xc000e1f130) Data frame received for 5\nI1116 09:55:11.952093 2248 log.go:181] (0xc000d0e000) (5) Data frame handling\nI1116 09:55:11.952120 2248 log.go:181] (0xc000d0e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1116 09:55:11.995341 2248 log.go:181] (0xc000e1f130) Data frame received for 3\nI1116 09:55:11.995368 2248 log.go:181] (0xc0007190e0) (3) Data frame handling\nI1116 09:55:11.995384 2248 log.go:181] (0xc0007190e0) (3) Data frame sent\nI1116 09:55:11.995724 2248 log.go:181] (0xc000e1f130) Data frame received for 5\nI1116 09:55:11.995757 2248 log.go:181] (0xc000d0e000) (5) Data frame handling\nI1116 09:55:11.995781 2248 log.go:181] (0xc000e1f130) Data frame received for 3\nI1116 09:55:11.995793 2248 log.go:181] (0xc0007190e0) (3) Data frame handling\nI1116 09:55:11.997801 
2248 log.go:181] (0xc000e1f130) Data frame received for 1\nI1116 09:55:11.997835 2248 log.go:181] (0xc000e9a820) (1) Data frame handling\nI1116 09:55:11.997871 2248 log.go:181] (0xc000e9a820) (1) Data frame sent\nI1116 09:55:11.997902 2248 log.go:181] (0xc000e1f130) (0xc000e9a820) Stream removed, broadcasting: 1\nI1116 09:55:11.997943 2248 log.go:181] (0xc000e1f130) Go away received\nI1116 09:55:11.998338 2248 log.go:181] (0xc000e1f130) (0xc000e9a820) Stream removed, broadcasting: 1\nI1116 09:55:11.998354 2248 log.go:181] (0xc000e1f130) (0xc0007190e0) Stream removed, broadcasting: 3\nI1116 09:55:11.998362 2248 log.go:181] (0xc000e1f130) (0xc000d0e000) Stream removed, broadcasting: 5\n" Nov 16 09:55:12.005: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 16 09:55:12.005: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 16 09:55:12.005: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 09:55:12.021: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Nov 16 09:55:22.030: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:55:22.030: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:55:22.030: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 16 09:55:22.044: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:22.044: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:22.044: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:22.044: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:22.044: INFO: Nov 16 09:55:22.044: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:23.202: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:23.203: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:23.203: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:23.203: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:23.203: INFO: Nov 16 09:55:23.203: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:25.053: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:25.053: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:25.053: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:25.053: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 
09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:25.053: INFO: Nov 16 09:55:25.053: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:26.059: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:26.059: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:26.059: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:26.059: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:26.059: INFO: Nov 16 09:55:26.059: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:27.065: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:27.065: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:27.065: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:27.065: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:27.065: INFO: Nov 16 09:55:27.065: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:28.070: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:28.070: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:28.071: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:28.071: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:28.071: INFO: Nov 16 09:55:28.071: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:29.076: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:29.076: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:29.076: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:29.076: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:29.076: INFO: Nov 16 09:55:29.076: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:30.081: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:30.081: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:30.081: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:30.081: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:30.081: INFO: Nov 16 09:55:30.081: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 16 09:55:31.086: INFO: POD NODE PHASE GRACE CONDITIONS Nov 16 09:55:31.086: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:54:40 +0000 UTC }] Nov 16 09:55:31.087: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:31.087: INFO: ss-2 latest-worker Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-16 09:55:00 +0000 UTC }] Nov 16 09:55:31.087: INFO: Nov 16 09:55:31.087: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6393 Nov 16 09:55:32.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:55:32.235: INFO: rc: 1 Nov 16 09:55:32.235: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 16 09:55:42.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:55:42.337: INFO: rc: 1 Nov 16 09:55:42.337: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 
Nov 16 09:55:52.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:55:52.436: INFO: rc: 1 Nov 16 09:55:52.437: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:56:02.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:56:02.539: INFO: rc: 1 Nov 16 09:56:02.539: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:56:12.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:56:12.650: INFO: rc: 1 Nov 16 09:56:12.650: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:56:22.651: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:56:22.765: INFO: rc: 1 Nov 16 09:56:22.765: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:56:32.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:56:32.867: INFO: rc: 1 Nov 16 09:56:32.867: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:56:42.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:56:42.971: INFO: rc: 1 Nov 16 09:56:42.971: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:56:52.972: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:56:53.073: INFO: rc: 1 Nov 16 09:56:53.073: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:57:03.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:57:03.180: INFO: rc: 1 Nov 16 09:57:03.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:57:13.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:57:13.287: INFO: rc: 1 Nov 16 09:57:13.287: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:57:23.288: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:57:23.396: INFO: rc: 1 Nov 16 09:57:23.396: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:57:33.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:57:33.496: INFO: rc: 1 Nov 16 09:57:33.496: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:57:43.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:57:46.481: INFO: rc: 1 Nov 16 09:57:46.481: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:57:56.481: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:57:56.587: INFO: rc: 1 Nov 16 09:57:56.587: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:58:06.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:58:06.709: INFO: rc: 1 Nov 16 09:58:06.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:58:16.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:58:16.811: INFO: rc: 1 Nov 16 09:58:16.811: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:58:26.811: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:58:26.917: INFO: rc: 1 Nov 16 09:58:26.917: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:58:36.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:58:37.862: INFO: rc: 1 Nov 16 09:58:37.862: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:58:47.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:58:47.978: INFO: rc: 1 Nov 16 09:58:47.978: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:58:57.979: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:58:58.094: INFO: rc: 1 Nov 16 09:58:58.094: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:59:08.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:59:08.222: INFO: rc: 1 Nov 16 09:59:08.222: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:59:18.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:59:18.326: INFO: rc: 1 Nov 16 09:59:18.327: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:59:28.327: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:59:28.440: INFO: rc: 1 Nov 16 09:59:28.441: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:59:38.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:59:38.553: INFO: rc: 1 Nov 16 09:59:38.553: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:59:48.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:59:48.664: INFO: rc: 1 Nov 16 09:59:48.664: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 09:59:58.665: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 09:59:58.788: INFO: rc: 1 Nov 16 09:59:58.788: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 10:00:08.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 10:00:08.907: INFO: rc: 1 Nov 16 10:00:08.908: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 10:00:18.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 10:00:19.013: INFO: rc: 1 Nov 16 10:00:19.013: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 10:00:29.013: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 10:00:29.123: INFO: rc: 1 Nov 16 10:00:29.123: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Nov 16 10:00:39.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6393 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 16 10:00:39.231: INFO: rc: 1 Nov 16 10:00:39.231: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Nov 16 10:00:39.231: INFO: Scaling statefulset ss to 0 Nov 16 10:00:39.243: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 16 10:00:39.245: INFO: Deleting all statefulset in ns statefulset-6393 Nov 16 10:00:39.248: INFO: Scaling statefulset ss to 0 Nov 16 10:00:39.254: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 10:00:39.256: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:00:39.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6393" for this suite. 
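The scale-down above re-runs the same `kubectl exec ... mv -v /tmp/index.html ...` probe every 10 seconds, logging `rc: 1` each time, until the command succeeds or the framework gives up and moves on. A minimal sketch of that retry pattern in Python, with the command runner injected so the sketch is self-contained (the helper name is hypothetical, not the e2e framework's actual API):

```python
import time

def retry_host_cmd(run_cmd, interval=10.0, timeout=300.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Re-run `run_cmd` until it returns rc == 0 or `timeout` elapses.

    `run_cmd` returns an (rc, stdout) pair, e.g. a wrapper around
    `kubectl exec --namespace=<ns> <pod> -- /bin/sh -c '<cmd>'`.
    On failure, wait `interval` seconds before the next attempt,
    mirroring the 10s cadence visible in the log above.
    """
    deadline = clock() + timeout
    while True:
        rc, stdout = run_cmd()
        if rc == 0:
            return rc, stdout
        if clock() >= deadline:
            # Give up: log the last attempt's stdout and move on.
            return rc, stdout
        sleep(interval)
```

In this run the pod is already gone, so every attempt fails with `pods "ss-0" not found`; after the deadline the framework logs the (empty) stdout of the last attempt and proceeds to scale the StatefulSet to 0.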
• [SLOW TEST:360.223 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":161,"skipped":2898,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:00:39.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace 
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:01:10.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6037" for this suite.
STEP: Destroying namespace "nsdeletetest-2249" for this suite.
Nov 16 10:01:10.578: INFO: Namespace nsdeletetest-2249 was already deleted
STEP: Destroying namespace "nsdeletetest-1742" for this suite.
• [SLOW TEST:31.257 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":162,"skipped":2903,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:01:10.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 10:01:10.624: INFO: Creating deployment "webserver-deployment"
Nov 16 10:01:10.634: INFO: Waiting for observed generation 1
Nov 16 10:01:13.003: INFO: Waiting for all required pods to come up
Nov 16 10:01:13.008: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Nov 16 10:01:23.206: INFO: Waiting for deployment "webserver-deployment" to complete
Nov 16 10:01:23.212: INFO: Updating deployment "webserver-deployment" with a non-existent image
Nov 16 10:01:23.221: INFO: Updating deployment webserver-deployment
Nov 16 10:01:23.221: INFO: Waiting for observed generation 2
Nov 16 10:01:25.237: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Nov 16 10:01:25.240: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Nov 16 10:01:25.242: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Nov 16 10:01:25.250: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Nov 16 10:01:25.250: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Nov 16 10:01:25.253: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Nov 16 10:01:25.257: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
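Editor's note on the replica counts the test asserts next: after the deployment is scaled from 10 to 30 while mid-rollout (old ReplicaSet at 8 available pods, new broken-image ReplicaSet at 5), the controller distributes the extra capacity proportionally, which is why the test expects .spec.replicas of 20 and 13. The sketch below is a simplified illustration of that arithmetic under the spec shown later in this log (RollingUpdate with maxSurge=3, maxUnavailable=2); the function name and rounding scheme are my own, not the controller's exact code.

```python
def proportional_scale(new_total, max_surge, replica_sets):
    """Distribute (new_total + max_surge) pods across replica sets in
    proportion to their current sizes; the smallest set absorbs rounding."""
    allowed = new_total + max_surge          # 30 + 3 = 33 pods permitted
    current = sum(replica_sets.values())     # 8 + 5 = 13 pods exist now
    leftover = allowed - current             # 20 pods still to hand out
    result, added = {}, 0
    names = sorted(replica_sets, key=replica_sets.get, reverse=True)
    for i, name in enumerate(names):
        if i == len(names) - 1:
            share = leftover - added         # last set takes the remainder
        else:
            share = round(leftover * replica_sets[name] / current)
            added += share
        result[name] = replica_sets[name] + share
    return result

# Old ReplicaSet has 8 available pods, new (webserver:404) ReplicaSet has 5:
print(proportional_scale(30, 3, {"old": 8, "new": 5}))
# -> {'old': 20, 'new': 13}
```

The result matches the log's "first rollout's replicaset has .spec.replicas = 20" and "second rollout's replicaset has .spec.replicas = 13" (20 + 13 = 33, the maxSurge ceiling for a desired size of 30).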
Nov 16 10:01:25.257: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Nov 16 10:01:25.262: INFO: Updating deployment webserver-deployment Nov 16 10:01:25.262: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Nov 16 10:01:25.384: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Nov 16 10:01:25.421: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 16 10:01:26.093: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2109 /apis/apps/v1/namespaces/deployment-2109/deployments/webserver-deployment 04650b80-7c5d-4028-813d-f05ed1a1936d 9787577 3 2020-11-16 10:01:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e4fbb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-11-16 10:01:24 +0000 
UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-11-16 10:01:25 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Nov 16 10:01:26.258: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-2109 /apis/apps/v1/namespaces/deployment-2109/replicasets/webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 9787621 3 2020-11-16 10:01:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 04650b80-7c5d-4028-813d-f05ed1a1936d 0xc00527a977 0xc00527a978}] [] [{kube-controller-manager Update apps/v1 2020-11-16 10:01:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04650b80-7c5d-4028-813d-f05ed1a1936d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00527a9f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:01:26.258: INFO: All old ReplicaSets of Deployment "webserver-deployment": Nov 16 10:01:26.258: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-2109 /apis/apps/v1/namespaces/deployment-2109/replicasets/webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 9787622 3 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 04650b80-7c5d-4028-813d-f05ed1a1936d 0xc00527aa57 0xc00527aa58}] [] [{kube-controller-manager Update apps/v1 2020-11-16 10:01:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04650b80-7c5d-4028-813d-f05ed1a1936d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selecto
r:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00527aac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:01:26.420: INFO: Pod "webserver-deployment-795d758f88-5ssf9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5ssf9 webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-5ssf9 01fc613e-7a3f-4dbc-a230-ee0b5e61e2dc 9787646 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20087 0xc003a20088}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-11-16 10:01:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.420: INFO: Pod "webserver-deployment-795d758f88-5zb8m" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5zb8m webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-5zb8m 3b1418ee-f2ea-4b24-93fd-188d1cd5aabd 9787590 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20237 0xc003a20238}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.420: INFO: Pod "webserver-deployment-795d758f88-86nsf" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-86nsf webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-86nsf 47073019-22f0-4ef1-b15d-4be684e74bf9 9787562 0 2020-11-16 10:01:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20397 0xc003a20398}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-11-16 10:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.420: INFO: Pod "webserver-deployment-795d758f88-94mpp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-94mpp webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-94mpp 6dbd5cbc-0841-48c2-a505-5408a9bd6d3c 9787557 0 2020-11-16 10:01:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20547 0xc003a20548}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-11-16 10:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.421: INFO: Pod "webserver-deployment-795d758f88-cdblc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cdblc webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-cdblc bcd1a8f5-e9b0-4423-8aaa-f18233baa6e4 9787620 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20757 0xc003a20758}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.421: INFO: Pod "webserver-deployment-795d758f88-hxtn8" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-hxtn8 webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-hxtn8 067026a2-a92e-4023-98d0-6bacb44472ea 9787619 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20977 0xc003a20978}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Po
dScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.421: INFO: Pod "webserver-deployment-795d758f88-kwsz5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kwsz5 webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-kwsz5 b1a2236e-567b-48f1-8338-41008bf1eb69 9787594 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20ab7 0xc003a20ab8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:
nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.421: INFO: Pod "webserver-deployment-795d758f88-mcc4d" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mcc4d webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-mcc4d dbcf322b-29b5-4a66-97fc-0eb0aa2fdb9a 9787544 0 2020-11-16 10:01:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20bf7 0xc003a20bf8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-11-16 10:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.421: INFO: Pod "webserver-deployment-795d758f88-mj7hd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mj7hd webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-mj7hd 8711bce8-3404-4aab-b759-0e1d07bd94ce 9787564 0 2020-11-16 10:01:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20da7 0xc003a20da8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-11-16 10:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.422: INFO: Pod "webserver-deployment-795d758f88-msstd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-msstd webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-msstd d70a112b-9430-47e8-8040-e734839f1b28 9787617 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a20f67 0xc003a20f68}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.422: INFO: Pod "webserver-deployment-795d758f88-rxt2q" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-rxt2q webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-rxt2q 8f0b2c0f-5322-48e2-b75f-f178b67ba644 9787618 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a213f7 0xc003a213f8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:P
odScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.422: INFO: Pod "webserver-deployment-795d758f88-t225n" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t225n webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-t225n 1c46dcaf-86b8-4958-8690-91a8070b0b87 9787623 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a21737 0xc003a21738}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker
:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.422: INFO: Pod "webserver-deployment-795d758f88-t8tzk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t8tzk webserver-deployment-795d758f88- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-795d758f88-t8tzk d5902e49-ac24-4314-abc2-f9b187604c37 9787543 0 2020-11-16 10:01:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 895f2702-847b-4426-afbb-c24afb5b4fb0 0xc003a21877 0xc003a21878}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"895f2702-847b-4426-afbb-c24afb5b4fb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-11-16 10:01:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.423: INFO: Pod "webserver-deployment-dd94f59b7-4bnt7" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4bnt7 webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-4bnt7 7f3d4585-f59e-4d46-aa12-9b4b8b8f7d64 9787465 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003a21a27 0xc003a21a28}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.241\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Set
HostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.241,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a5a45112d93f0d0935e7e4ed921df6d165f5a8137baca24ec0ee3ff911945ac1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.423: INFO: Pod "webserver-deployment-dd94f59b7-65f4s" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-65f4s webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-65f4s ff3d43ba-fc85-4e46-bacc-2b66a25629f4 9787596 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 
0xc003a21bd7 0xc003a21bd8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAs
NonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 
10:01:26.423: INFO: Pod "webserver-deployment-dd94f59b7-66b8r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-66b8r webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-66b8r 70fb18eb-5a98-41e6-bf87-fd6168316db1 9787592 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003a21d07 0xc003a21d08}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:R
esourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},S
etHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.423: INFO: Pod "webserver-deployment-dd94f59b7-6wvc6" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6wvc6 webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-6wvc6 4ba68f0a-c611-4427-a292-d032d26aeb6e 9787500 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003a21e37 0xc003a21e38}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.244\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.244,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf17fba55d9a5e89d705380da09e7a06d8b249d07e4e9ee9311c72e37d0547b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.424: INFO: Pod "webserver-deployment-dd94f59b7-7wlq4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7wlq4 webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-7wlq4 fcec6bbd-ddb8-415f-aaff-136ce0795f8f 9787615 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003a21fe7 0xc003a21fe8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.424: INFO: Pod "webserver-deployment-dd94f59b7-96jzr" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-96jzr webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-96jzr 0c85c804-1802-43b4-a726-1dbc253a0553 9787503 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c20117 0xc003c20118}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.245,StartTime:2020-11-16 10:01:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fb760c246472e4de36de94c1206df606190ff0e59a71587bc3c22e6e32b83e62,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.424: INFO: Pod "webserver-deployment-dd94f59b7-bqdvw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bqdvw webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-bqdvw 5a06f638-64d7-4d25-8cf4-99625cf7d15d 9787464 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c202c7 0xc003c202c8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.129\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.129,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0e79d2c9309bb7d8e52fb888778e94fc21a6c0cb8b126ec6b405fa954f9c95e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.424: INFO: Pod "webserver-deployment-dd94f59b7-dhtkr" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dhtkr webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-dhtkr 33d63658-cce0-44ce-b4b7-3c4bf0d8c81d 9787626 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 
0xc003c20477 0xc003c20478}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAs
NonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 
10:01:26.425: INFO: Pod "webserver-deployment-dd94f59b7-hhgrb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hhgrb webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-hhgrb d8e22845-12a3-4000-a186-404e6134cc3d 9787430 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c205b7 0xc003c205b8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.127\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.127,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2f7049639f2189d00c2e8aee79e3e65a189ddbf1be6f112154c2c5577e9ce29e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.425: INFO: Pod "webserver-deployment-dd94f59b7-jlsg4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jlsg4 webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-jlsg4 fc7b96e9-f9ef-4166-bbaa-862a9a4b8ee9 9787629 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c20767 0xc003c20768}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.425: INFO: Pod "webserver-deployment-dd94f59b7-jv5pj" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jv5pj webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-jv5pj 1c161e42-cead-454f-94cc-ebef68069d93 9787584 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c208a7 0xc003c208a8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{P
odCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.425: INFO: Pod "webserver-deployment-dd94f59b7-mhc5k" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mhc5k webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-mhc5k 338ea12f-d71a-4cf1-8b1a-9180a5e16787 9787601 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c209d7 0xc003c209d8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS
:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.426: INFO: Pod "webserver-deployment-dd94f59b7-mr89t" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mr89t webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-mr89t 6ada6cc2-a6f7-42c0-95ad-bd2a386658b3 9787624 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c20b07 0xc003c20b08}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.426: INFO: Pod "webserver-deployment-dd94f59b7-nhv79" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nhv79 webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-nhv79 81f57341-690d-4c7f-a382-6542325abdc2 9787628 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c20c37 0xc003c20c38}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{P
odCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.426: INFO: Pod "webserver-deployment-dd94f59b7-pp8nw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pp8nw webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-pp8nw 3590a423-33af-44a5-958a-a72ccd8f4529 9787616 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c20d67 0xc003c20d68}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS
:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists
,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.426: INFO: Pod "webserver-deployment-dd94f59b7-pqww9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pqww9 webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-pqww9 ebc8b007-74c1-4356-aaa5-48efe51c7286 9787483 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c20e97 0xc003c20e98}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.243\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Set
HostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.243,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://abbfabe16304f160190e19dc05d3dafe2f68fd4b838f76098e81a1414b7a39ff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.426: INFO: Pod "webserver-deployment-dd94f59b7-qmmkr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qmmkr webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-qmmkr d74a6bdb-af7b-419f-9bf4-2bc326b4b7b2 9787456 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c21047 
0xc003c21048}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.128\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.3
8-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologyS
preadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.128,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f8c8b07c83470fdd931177da014fb6b5e9ba6f68769b9ecba1279da079f057de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.128,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.427: INFO: Pod "webserver-deployment-dd94f59b7-qt5jw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qt5jw webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-qt5jw de288901-7b31-4e6a-af91-973439c8d2d4 9787613 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] 
[{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c211f7 0xc003c211f8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContex
t:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:25 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.427: INFO: Pod "webserver-deployment-dd94f59b7-vcd4f" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vcd4f webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-vcd4f 72544d68-729d-430a-b7c2-081a70c858c7 9787455 0 2020-11-16 10:01:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c21327 0xc003c21328}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:01:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.242,StartTime:2020-11-16 10:01:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://915b33a750c77e2be07aa5f92dc1154b0d43213b107c80e6d398fb7413829c8a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:01:26.427: INFO: Pod "webserver-deployment-dd94f59b7-wlg9z" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wlg9z webserver-deployment-dd94f59b7- deployment-2109 /api/v1/namespaces/deployment-2109/pods/webserver-deployment-dd94f59b7-wlg9z 2fd1ef88-fb9b-447b-b9cc-1a4aa07f7d59 9787625 0 2020-11-16 10:01:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 1d68ac61-8cc2-4b80-be4f-389edb57b70c 0xc003c214d7 0xc003c214d8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d68ac61-8cc2-4b80-be4f-389edb57b70c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nf8d4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nf8d4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nf8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:01:26.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2109" for this suite. • [SLOW TEST:16.138 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":163,"skipped":2924,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:01:26.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Nov 16 10:01:46.454: INFO: Successfully updated pod "annotationupdate53dde06c-a193-4421-ab05-aabbdb58f34e" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:01:48.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6138" for this suite. • [SLOW TEST:21.799 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2942,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:01:48.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 10:01:49.914: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 10:01:51.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117709, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117709, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117710, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117709, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:01:53.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117709, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117709, 
loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117710, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117709, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 10:01:56.980: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:01:57.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4223" for this suite. STEP: Destroying namespace "webhook-4223-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.726 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":165,"skipped":2961,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:01:57.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
Nov 16 10:01:57.425: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Nov 16 10:02:02.431: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 16 10:02:02.431: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 16 10:02:02.530: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-637 /apis/apps/v1/namespaces/deployment-637/deployments/test-cleanup-deployment 61b3aa8e-db13-4084-a8b3-238d6d978dd2 9788127 1 2020-11-16 10:02:02 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-11-16 10:02:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055461a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Nov 16 10:02:02.541: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-637 /apis/apps/v1/namespaces/deployment-637/replicasets/test-cleanup-deployment-5d446bdd47 acce9cdc-1409-4a74-86cd-c115f03ea63c 9788130 1 2020-11-16 10:02:02 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 61b3aa8e-db13-4084-a8b3-238d6d978dd2 0xc0056315a7 0xc0056315a8}] [] [{kube-controller-manager Update apps/v1 2020-11-16 10:02:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61b3aa8e-db13-4084-a8b3-238d6d978dd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005631648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:02:02.541: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Nov 16 10:02:02.542: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-637 /apis/apps/v1/namespaces/deployment-637/replicasets/test-cleanup-controller fed2718c-84e0-409d-bdae-0f42a8f1cf64 9788129 1 2020-11-16 10:01:57 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 61b3aa8e-db13-4084-a8b3-238d6d978dd2 0xc005631497 0xc005631498}] [] [{e2e.test Update apps/v1 2020-11-16 10:01:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 10:02:02 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"61b3aa8e-db13-4084-a8b3-238d6d978dd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005631538 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:02:02.560: INFO: Pod "test-cleanup-controller-zmkz8" is available: &Pod{ObjectMeta:{test-cleanup-controller-zmkz8 test-cleanup-controller- deployment-637 /api/v1/namespaces/deployment-637/pods/test-cleanup-controller-zmkz8 1d39605b-2fab-4a47-bddb-1beb860d6062 9788103 0 2020-11-16 10:01:57 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller fed2718c-84e0-409d-bdae-0f42a8f1cf64 0xc005631ae7 0xc005631ae8}] [] [{kube-controller-manager Update v1 2020-11-16 10:01:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed2718c-84e0-409d-bdae-0f42a8f1cf64\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:02:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.148\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz7gz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz7gz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz7gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,D
eprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:02:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:02:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:01:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.148,StartTime:2020-11-16 10:01:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:01:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://457e0b2b7a1b108cd5c17d81ca4790988d7bc496146523b5a48b4e222d14b352,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.148,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 16 10:02:02.561: INFO: Pod "test-cleanup-deployment-5d446bdd47-7x8zq" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-7x8zq test-cleanup-deployment-5d446bdd47- deployment-637 /api/v1/namespaces/deployment-637/pods/test-cleanup-deployment-5d446bdd47-7x8zq 50a72f0f-fd89-4690-8d0d-74c74a6253bf 9788136 0 2020-11-16 10:02:02 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 acce9cdc-1409-4a74-86cd-c115f03ea63c 0xc005631ca7 0xc005631ca8}] [] [{kube-controller-manager Update v1 2020-11-16 10:02:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"acce9cdc-1409-4a74-86cd-c115f03ea63c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tz7gz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tz7gz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tz7gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:02:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:02:02.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-637" for this suite. • [SLOW TEST:5.426 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":166,"skipped":2983,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:02:02.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all 
guestbook components Nov 16 10:02:02.809: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Nov 16 10:02:02.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2186' Nov 16 10:02:03.200: INFO: stderr: "" Nov 16 10:02:03.200: INFO: stdout: "service/agnhost-replica created\n" Nov 16 10:02:03.200: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Nov 16 10:02:03.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2186' Nov 16 10:02:03.516: INFO: stderr: "" Nov 16 10:02:03.516: INFO: stdout: "service/agnhost-primary created\n" Nov 16 10:02:03.516: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Nov 16 10:02:03.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2186' Nov 16 10:02:03.921: INFO: stderr: "" Nov 16 10:02:03.921: INFO: stdout: "service/frontend created\n" Nov 16 10:02:03.922: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Nov 16 10:02:03.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2186' Nov 16 10:02:04.541: INFO: stderr: "" Nov 16 10:02:04.541: INFO: stdout: "deployment.apps/frontend created\n" Nov 16 10:02:04.541: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Nov 16 10:02:04.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2186' Nov 16 10:02:05.156: INFO: stderr: "" Nov 16 10:02:05.156: INFO: stdout: "deployment.apps/agnhost-primary created\n" Nov 16 10:02:05.156: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost 
role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Nov 16 10:02:05.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2186' Nov 16 10:02:05.660: INFO: stderr: "" Nov 16 10:02:05.660: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Nov 16 10:02:05.660: INFO: Waiting for all frontend pods to be Running. Nov 16 10:02:15.710: INFO: Waiting for frontend to serve content. Nov 16 10:02:15.722: INFO: Trying to add a new entry to the guestbook. Nov 16 10:02:15.732: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Nov 16 10:02:15.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2186' Nov 16 10:02:15.916: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:02:15.916: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Nov 16 10:02:15.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2186' Nov 16 10:02:16.097: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:02:16.097: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Nov 16 10:02:16.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2186' Nov 16 10:02:16.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:02:16.277: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 16 10:02:16.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2186' Nov 16 10:02:16.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:02:16.397: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 16 10:02:16.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2186' Nov 16 10:02:16.771: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:02:16.771: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Nov 16 10:02:16.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2186' Nov 16 10:02:17.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:02:17.356: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:02:17.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2186" for this suite. 
• [SLOW TEST:14.901 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":167,"skipped":2989,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:02:17.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume 
plugin Nov 16 10:02:18.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a" in namespace "projected-5699" to be "Succeeded or Failed" Nov 16 10:02:18.782: INFO: Pod "downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a": Phase="Pending", Reason="", readiness=false. Elapsed: 543.348802ms Nov 16 10:02:20.866: INFO: Pod "downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627204178s Nov 16 10:02:22.895: INFO: Pod "downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a": Phase="Running", Reason="", readiness=true. Elapsed: 4.656387192s Nov 16 10:02:24.900: INFO: Pod "downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.661696126s STEP: Saw pod success Nov 16 10:02:24.901: INFO: Pod "downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a" satisfied condition "Succeeded or Failed" Nov 16 10:02:24.904: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a container client-container: STEP: delete the pod Nov 16 10:02:24.997: INFO: Waiting for pod downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a to disappear Nov 16 10:02:25.000: INFO: Pod downwardapi-volume-1d337425-4c23-4cf0-ae1c-d62c1942ac8a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:02:25.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5699" for this suite. 
• [SLOW TEST:7.435 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2993,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:02:25.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:02:41.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1169" for this suite. • [SLOW TEST:16.245 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":169,"skipped":3018,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:02:41.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-3fb65eb6-093b-4503-8af5-73ca5ffdd90d STEP: Creating a pod to test consume secrets Nov 16 10:02:41.635: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d" in namespace "projected-1004" to be "Succeeded or Failed" Nov 16 10:02:41.699: INFO: Pod "pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d": Phase="Pending", Reason="", readiness=false. Elapsed: 63.331767ms Nov 16 10:02:43.703: INFO: Pod "pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067233714s Nov 16 10:02:45.716: INFO: Pod "pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080485959s STEP: Saw pod success Nov 16 10:02:45.716: INFO: Pod "pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d" satisfied condition "Succeeded or Failed" Nov 16 10:02:45.719: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d container projected-secret-volume-test: STEP: delete the pod Nov 16 10:02:45.757: INFO: Waiting for pod pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d to disappear Nov 16 10:02:45.770: INFO: Pod pod-projected-secrets-9b84b261-7711-4fce-b3c6-3397fc2e885d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:02:45.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1004" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":3020,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:02:45.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and 
fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-8772907e-f47a-4f1a-a2f8-825c706f4fa5 STEP: Creating a pod to test consume secrets Nov 16 10:02:46.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e" in namespace "projected-6134" to be "Succeeded or Failed" Nov 16 10:02:46.195: INFO: Pod "pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e": Phase="Pending", Reason="", readiness=false. Elapsed: 51.388348ms Nov 16 10:02:48.200: INFO: Pod "pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056385117s Nov 16 10:02:50.205: INFO: Pod "pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061176027s STEP: Saw pod success Nov 16 10:02:50.205: INFO: Pod "pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e" satisfied condition "Succeeded or Failed" Nov 16 10:02:50.209: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e container projected-secret-volume-test: STEP: delete the pod Nov 16 10:02:50.231: INFO: Waiting for pod pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e to disappear Nov 16 10:02:50.247: INFO: Pod pod-projected-secrets-7afd5734-5d55-408c-915c-23cfacfdc42e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:02:50.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6134" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":3038,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:02:50.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-4ckm STEP: Creating a pod to test atomic-volume-subpath Nov 16 10:02:50.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4ckm" in namespace "subpath-8298" to be "Succeeded or Failed" Nov 16 10:02:50.484: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Pending", Reason="", readiness=false. Elapsed: 34.63918ms Nov 16 10:02:52.615: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.166058781s Nov 16 10:02:54.621: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 4.171103081s Nov 16 10:02:56.627: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 6.177503624s Nov 16 10:02:58.632: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 8.183083863s Nov 16 10:03:00.637: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 10.187760615s Nov 16 10:03:02.640: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 12.191022099s Nov 16 10:03:04.650: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 14.200379314s Nov 16 10:03:06.656: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 16.206883926s Nov 16 10:03:08.662: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 18.212663543s Nov 16 10:03:10.668: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 20.218330449s Nov 16 10:03:12.671: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Running", Reason="", readiness=true. Elapsed: 22.221796997s Nov 16 10:03:14.687: INFO: Pod "pod-subpath-test-configmap-4ckm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.237492802s STEP: Saw pod success Nov 16 10:03:14.687: INFO: Pod "pod-subpath-test-configmap-4ckm" satisfied condition "Succeeded or Failed" Nov 16 10:03:14.690: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-4ckm container test-container-subpath-configmap-4ckm: STEP: delete the pod Nov 16 10:03:14.724: INFO: Waiting for pod pod-subpath-test-configmap-4ckm to disappear Nov 16 10:03:14.764: INFO: Pod pod-subpath-test-configmap-4ckm no longer exists STEP: Deleting pod pod-subpath-test-configmap-4ckm Nov 16 10:03:14.765: INFO: Deleting pod "pod-subpath-test-configmap-4ckm" in namespace "subpath-8298" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:03:14.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8298" for this suite. • [SLOW TEST:24.522 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":172,"skipped":3038,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and 
limits for memory and cpu [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:03:14.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:163 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:03:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9940" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":173,"skipped":3045,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:03:15.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:03:15.292: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-fe89793e-1060-42f5-95f8-b0a9da1377fb" in namespace "security-context-test-7592" to be "Succeeded or Failed" Nov 16 10:03:15.311: INFO: Pod "alpine-nnp-false-fe89793e-1060-42f5-95f8-b0a9da1377fb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.316131ms Nov 16 10:03:17.771: INFO: Pod "alpine-nnp-false-fe89793e-1060-42f5-95f8-b0a9da1377fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.478747798s Nov 16 10:03:19.775: INFO: Pod "alpine-nnp-false-fe89793e-1060-42f5-95f8-b0a9da1377fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.482235138s Nov 16 10:03:19.775: INFO: Pod "alpine-nnp-false-fe89793e-1060-42f5-95f8-b0a9da1377fb" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:03:19.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7592" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":3057,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:03:19.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: 
Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:03:26.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-813" for this suite. STEP: Destroying namespace "nsdeletetest-3952" for this suite. Nov 16 10:03:26.119: INFO: Namespace nsdeletetest-3952 was already deleted STEP: Destroying namespace "nsdeletetest-8116" for this suite. • [SLOW TEST:6.305 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":175,"skipped":3059,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:03:26.123: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 16 10:03:26.232: INFO: Waiting up to 5m0s for pod "pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce" in namespace "emptydir-175" to be "Succeeded or Failed" Nov 16 10:03:26.339: INFO: Pod "pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 106.873852ms Nov 16 10:03:28.344: INFO: Pod "pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111777409s Nov 16 10:03:30.349: INFO: Pod "pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116543112s STEP: Saw pod success Nov 16 10:03:30.349: INFO: Pod "pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce" satisfied condition "Succeeded or Failed" Nov 16 10:03:30.351: INFO: Trying to get logs from node latest-worker pod pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce container test-container: STEP: delete the pod Nov 16 10:03:30.405: INFO: Waiting for pod pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce to disappear Nov 16 10:03:30.419: INFO: Pod pod-6b70f2ea-870a-413d-ab70-cc80f9f1b7ce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:03:30.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-175" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":3067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:03:30.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:03:30.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":177,"skipped":3107,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:03:30.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Nov 16 10:03:34.786: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3410 PodName:var-expansion-4b835387-acfd-4691-8038-b2dff26b33db ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:03:34.786: INFO: >>> kubeConfig: /root/.kube/config I1116 10:03:34.819392 7 log.go:181] 
(0xc0068d06e0) (0xc001296320) Create stream I1116 10:03:34.819429 7 log.go:181] (0xc0068d06e0) (0xc001296320) Stream added, broadcasting: 1 I1116 10:03:34.821650 7 log.go:181] (0xc0068d06e0) Reply frame received for 1 I1116 10:03:34.821686 7 log.go:181] (0xc0068d06e0) (0xc005074be0) Create stream I1116 10:03:34.821698 7 log.go:181] (0xc0068d06e0) (0xc005074be0) Stream added, broadcasting: 3 I1116 10:03:34.822548 7 log.go:181] (0xc0068d06e0) Reply frame received for 3 I1116 10:03:34.822572 7 log.go:181] (0xc0068d06e0) (0xc001296460) Create stream I1116 10:03:34.822581 7 log.go:181] (0xc0068d06e0) (0xc001296460) Stream added, broadcasting: 5 I1116 10:03:34.823275 7 log.go:181] (0xc0068d06e0) Reply frame received for 5 I1116 10:03:34.893184 7 log.go:181] (0xc0068d06e0) Data frame received for 5 I1116 10:03:34.893236 7 log.go:181] (0xc001296460) (5) Data frame handling I1116 10:03:34.893270 7 log.go:181] (0xc0068d06e0) Data frame received for 3 I1116 10:03:34.893313 7 log.go:181] (0xc005074be0) (3) Data frame handling I1116 10:03:34.895376 7 log.go:181] (0xc0068d06e0) Data frame received for 1 I1116 10:03:34.895457 7 log.go:181] (0xc001296320) (1) Data frame handling I1116 10:03:34.895519 7 log.go:181] (0xc001296320) (1) Data frame sent I1116 10:03:34.895566 7 log.go:181] (0xc0068d06e0) (0xc001296320) Stream removed, broadcasting: 1 I1116 10:03:34.895614 7 log.go:181] (0xc0068d06e0) Go away received I1116 10:03:34.895752 7 log.go:181] (0xc0068d06e0) (0xc001296320) Stream removed, broadcasting: 1 I1116 10:03:34.895786 7 log.go:181] (0xc0068d06e0) (0xc005074be0) Stream removed, broadcasting: 3 I1116 10:03:34.895805 7 log.go:181] (0xc0068d06e0) (0xc001296460) Stream removed, broadcasting: 5 STEP: test for file in mounted path Nov 16 10:03:34.899: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3410 PodName:var-expansion-4b835387-acfd-4691-8038-b2dff26b33db ContainerName:dapi-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Nov 16 10:03:34.900: INFO: >>> kubeConfig: /root/.kube/config I1116 10:03:34.932044 7 log.go:181] (0xc00003b130) (0xc001c83ea0) Create stream I1116 10:03:34.932070 7 log.go:181] (0xc00003b130) (0xc001c83ea0) Stream added, broadcasting: 1 I1116 10:03:34.934169 7 log.go:181] (0xc00003b130) Reply frame received for 1 I1116 10:03:34.934199 7 log.go:181] (0xc00003b130) (0xc0001bd5e0) Create stream I1116 10:03:34.934210 7 log.go:181] (0xc00003b130) (0xc0001bd5e0) Stream added, broadcasting: 3 I1116 10:03:34.935141 7 log.go:181] (0xc00003b130) Reply frame received for 3 I1116 10:03:34.935200 7 log.go:181] (0xc00003b130) (0xc001296500) Create stream I1116 10:03:34.935226 7 log.go:181] (0xc00003b130) (0xc001296500) Stream added, broadcasting: 5 I1116 10:03:34.936204 7 log.go:181] (0xc00003b130) Reply frame received for 5 I1116 10:03:34.996710 7 log.go:181] (0xc00003b130) Data frame received for 3 I1116 10:03:34.996742 7 log.go:181] (0xc0001bd5e0) (3) Data frame handling I1116 10:03:34.996784 7 log.go:181] (0xc00003b130) Data frame received for 5 I1116 10:03:34.996806 7 log.go:181] (0xc001296500) (5) Data frame handling I1116 10:03:34.998443 7 log.go:181] (0xc00003b130) Data frame received for 1 I1116 10:03:34.998539 7 log.go:181] (0xc001c83ea0) (1) Data frame handling I1116 10:03:34.998599 7 log.go:181] (0xc001c83ea0) (1) Data frame sent I1116 10:03:34.998631 7 log.go:181] (0xc00003b130) (0xc001c83ea0) Stream removed, broadcasting: 1 I1116 10:03:34.998664 7 log.go:181] (0xc00003b130) Go away received I1116 10:03:34.998785 7 log.go:181] (0xc00003b130) (0xc001c83ea0) Stream removed, broadcasting: 1 I1116 10:03:34.998823 7 log.go:181] (0xc00003b130) (0xc0001bd5e0) Stream removed, broadcasting: 3 I1116 10:03:34.998853 7 log.go:181] (0xc00003b130) (0xc001296500) Stream removed, broadcasting: 5 STEP: updating the annotation value Nov 16 10:03:35.506: INFO: Successfully updated pod "var-expansion-4b835387-acfd-4691-8038-b2dff26b33db" 
STEP: waiting for annotated pod running STEP: deleting the pod gracefully Nov 16 10:03:35.529: INFO: Deleting pod "var-expansion-4b835387-acfd-4691-8038-b2dff26b33db" in namespace "var-expansion-3410" Nov 16 10:03:35.534: INFO: Wait up to 5m0s for pod "var-expansion-4b835387-acfd-4691-8038-b2dff26b33db" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:17.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3410" for this suite. • [SLOW TEST:47.012 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":178,"skipped":3125,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:17.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8ea51ca6-0dc2-4c2c-9000-b17988f6c80a STEP: Creating a pod to test consume secrets Nov 16 10:04:17.671: INFO: Waiting up to 5m0s for pod "pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278" in namespace "secrets-567" to be "Succeeded or Failed" Nov 16 10:04:17.682: INFO: Pod "pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278": Phase="Pending", Reason="", readiness=false. Elapsed: 10.592398ms Nov 16 10:04:19.705: INFO: Pod "pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034181943s Nov 16 10:04:21.711: INFO: Pod "pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039540206s STEP: Saw pod success Nov 16 10:04:21.711: INFO: Pod "pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278" satisfied condition "Succeeded or Failed" Nov 16 10:04:21.714: INFO: Trying to get logs from node latest-worker pod pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278 container secret-volume-test: STEP: delete the pod Nov 16 10:04:21.732: INFO: Waiting for pod pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278 to disappear Nov 16 10:04:21.752: INFO: Pod pod-secrets-e518f59e-a469-46ba-be29-e2c16a258278 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:21.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-567" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":3126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:21.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Nov 16 10:04:21.827: INFO: >>> kubeConfig: /root/.kube/config Nov 16 10:04:24.770: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:35.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3526" for this suite. 
• [SLOW TEST:13.856 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":180,"skipped":3160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:35.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:35.699: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6063" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":181,"skipped":3184,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:35.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 10:04:35.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c" in namespace "downward-api-5577" to be "Succeeded or Failed" Nov 16 10:04:35.777: INFO: Pod 
"downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.863883ms Nov 16 10:04:37.781: INFO: Pod "downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018830928s Nov 16 10:04:39.786: INFO: Pod "downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02375573s STEP: Saw pod success Nov 16 10:04:39.786: INFO: Pod "downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c" satisfied condition "Succeeded or Failed" Nov 16 10:04:39.789: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c container client-container: STEP: delete the pod Nov 16 10:04:39.821: INFO: Waiting for pod downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c to disappear Nov 16 10:04:39.831: INFO: Pod downwardapi-volume-e059bad8-9fb7-433a-8d76-0ddbbd3eab4c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:39.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5577" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":3206,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:39.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Nov 16 10:04:39.899: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Nov 16 10:04:40.883: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Nov 16 10:04:43.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117880, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117880, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117881, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117880, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:04:45.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117880, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117880, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117881, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741117880, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:04:48.091: INFO: Waited 727.444168ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:48.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2747" for this suite. • [SLOW TEST:8.882 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":183,"skipped":3208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:48.739: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Nov 16 10:04:48.779: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix601087252/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:48.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7576" for this suite. 
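The proxy invocation logged above (the doubled `kubectl kubectl` appears to be how the framework echoes the binary path followed by the argument list) can be reproduced by hand roughly as below. This is a sketch that assumes a reachable cluster configured in `~/.kube/config`; the socket path is illustrative.

```shell
# Start an API proxy listening on a Unix socket instead of a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# Query the API root through the socket, as the test does for /api/
# (curl 7.40+ supports --unix-socket).
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
```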
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":184,"skipped":3234,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:48.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 16 10:04:48.918: INFO: Waiting up to 5m0s for pod "downward-api-a04529ee-76bf-440e-a059-eb4561a002c5" in namespace "downward-api-4142" to be "Succeeded or Failed" Nov 16 10:04:48.950: INFO: Pod "downward-api-a04529ee-76bf-440e-a059-eb4561a002c5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.628345ms Nov 16 10:04:50.972: INFO: Pod "downward-api-a04529ee-76bf-440e-a059-eb4561a002c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05380163s Nov 16 10:04:52.976: INFO: Pod "downward-api-a04529ee-76bf-440e-a059-eb4561a002c5": Phase="Running", Reason="", readiness=true. Elapsed: 4.058402883s Nov 16 10:04:54.981: INFO: Pod "downward-api-a04529ee-76bf-440e-a059-eb4561a002c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.063286539s STEP: Saw pod success Nov 16 10:04:54.981: INFO: Pod "downward-api-a04529ee-76bf-440e-a059-eb4561a002c5" satisfied condition "Succeeded or Failed" Nov 16 10:04:54.983: INFO: Trying to get logs from node latest-worker pod downward-api-a04529ee-76bf-440e-a059-eb4561a002c5 container dapi-container: STEP: delete the pod Nov 16 10:04:55.034: INFO: Waiting for pod downward-api-a04529ee-76bf-440e-a059-eb4561a002c5 to disappear Nov 16 10:04:55.048: INFO: Pod downward-api-a04529ee-76bf-440e-a059-eb4561a002c5 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:04:55.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4142" for this suite. • [SLOW TEST:6.233 seconds] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":185,"skipped":3252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:04:55.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1116 10:04:56.460771 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 16 10:05:58.478: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:05:58.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2330" for this suite. 
• [SLOW TEST:63.393 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":186,"skipped":3301,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:05:58.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 10:05:58.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3" in namespace 
"projected-5060" to be "Succeeded or Failed" Nov 16 10:05:58.617: INFO: Pod "downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3": Phase="Pending", Reason="", readiness=false. Elapsed: 54.751162ms Nov 16 10:06:00.623: INFO: Pod "downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060949956s Nov 16 10:06:02.627: INFO: Pod "downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064460611s STEP: Saw pod success Nov 16 10:06:02.627: INFO: Pod "downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3" satisfied condition "Succeeded or Failed" Nov 16 10:06:02.629: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3 container client-container: STEP: delete the pod Nov 16 10:06:02.756: INFO: Waiting for pod downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3 to disappear Nov 16 10:06:02.798: INFO: Pod downwardapi-volume-809974c5-10c8-498e-bd8c-a9ce2c4d65b3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:06:02.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5060" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":187,"skipped":3307,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:06:02.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-1ee65e4d-a0d5-40cb-b28b-d6a902f2716c STEP: Creating a pod to test consume configMaps Nov 16 10:06:02.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67" in namespace "projected-3944" to be "Succeeded or Failed" Nov 16 10:06:02.953: INFO: Pod "pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67": Phase="Pending", Reason="", readiness=false. Elapsed: 9.614984ms Nov 16 10:06:04.958: INFO: Pod "pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013945924s Nov 16 10:06:06.962: INFO: Pod "pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018620364s STEP: Saw pod success Nov 16 10:06:06.962: INFO: Pod "pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67" satisfied condition "Succeeded or Failed" Nov 16 10:06:06.966: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67 container projected-configmap-volume-test: STEP: delete the pod Nov 16 10:06:06.993: INFO: Waiting for pod pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67 to disappear Nov 16 10:06:07.010: INFO: Pod pod-projected-configmaps-7f7ee653-9bf5-48a7-9e09-170af50caf67 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:06:07.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3944" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":3311,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:06:07.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1116 10:06:17.138837 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 16 10:07:19.155: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:19.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5204" for this suite. • [SLOW TEST:72.143 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":189,"skipped":3318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:19.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 10:07:19.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2" in namespace "downward-api-8391" to be "Succeeded or Failed" Nov 16 10:07:19.275: INFO: Pod "downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.691564ms Nov 16 10:07:21.376: INFO: Pod "downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119186192s Nov 16 10:07:23.381: INFO: Pod "downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.123812332s Nov 16 10:07:25.387: INFO: Pod "downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.129617125s STEP: Saw pod success Nov 16 10:07:25.387: INFO: Pod "downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2" satisfied condition "Succeeded or Failed" Nov 16 10:07:25.390: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2 container client-container: STEP: delete the pod Nov 16 10:07:25.421: INFO: Waiting for pod downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2 to disappear Nov 16 10:07:25.463: INFO: Pod downwardapi-volume-6a88a306-83ff-42ad-80cd-608876dfabf2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:25.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8391" for this suite. • [SLOW TEST:6.315 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":3413,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:25.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 16 10:07:25.583: INFO: Waiting up to 5m0s for pod "pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97" in namespace "emptydir-1678" to be "Succeeded or Failed" Nov 16 10:07:25.600: INFO: Pod "pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97": Phase="Pending", Reason="", readiness=false. Elapsed: 16.247616ms Nov 16 10:07:27.603: INFO: Pod "pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019690621s Nov 16 10:07:29.607: INFO: Pod "pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023602368s STEP: Saw pod success Nov 16 10:07:29.607: INFO: Pod "pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97" satisfied condition "Succeeded or Failed" Nov 16 10:07:29.610: INFO: Trying to get logs from node latest-worker pod pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97 container test-container: STEP: delete the pod Nov 16 10:07:29.661: INFO: Waiting for pod pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97 to disappear Nov 16 10:07:29.665: INFO: Pod pod-301cf506-97ec-4b4d-ab25-e6686dd1fd97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:29.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1678" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":3416,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:29.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod 
to test emptydir 0644 on tmpfs Nov 16 10:07:29.772: INFO: Waiting up to 5m0s for pod "pod-10069caf-8d0d-4673-bce6-ab261c0d0b90" in namespace "emptydir-999" to be "Succeeded or Failed" Nov 16 10:07:29.779: INFO: Pod "pod-10069caf-8d0d-4673-bce6-ab261c0d0b90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38208ms Nov 16 10:07:31.856: INFO: Pod "pod-10069caf-8d0d-4673-bce6-ab261c0d0b90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083382999s Nov 16 10:07:33.860: INFO: Pod "pod-10069caf-8d0d-4673-bce6-ab261c0d0b90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087433379s STEP: Saw pod success Nov 16 10:07:33.860: INFO: Pod "pod-10069caf-8d0d-4673-bce6-ab261c0d0b90" satisfied condition "Succeeded or Failed" Nov 16 10:07:33.862: INFO: Trying to get logs from node latest-worker pod pod-10069caf-8d0d-4673-bce6-ab261c0d0b90 container test-container: STEP: delete the pod Nov 16 10:07:33.972: INFO: Waiting for pod pod-10069caf-8d0d-4673-bce6-ab261c0d0b90 to disappear Nov 16 10:07:33.984: INFO: Pod pod-10069caf-8d0d-4673-bce6-ab261c0d0b90 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:33.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-999" for this suite. 
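The (non-root,0644,tmpfs) case above can be approximated with a manifest like this sketch (UID, names, and image are illustrative assumptions, not the test's generated values): a memory-backed `emptyDir` mounted into a container running as a non-root user, with a file created at mode 0644.

```yaml
# Hypothetical manifest mirroring the (non-root,0644,tmpfs) emptyDir case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # non-root UID; illustrative value
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory       # tmpfs-backed emptyDir
```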
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":3422,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:33.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 16 10:07:34.100: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 16 10:07:34.128: INFO: Waiting for terminating namespaces to be deleted... 
Nov 16 10:07:34.131: INFO: Logging pods the apiserver thinks is on node latest-worker before test Nov 16 10:07:34.135: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.135: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 10:07:34.135: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.135: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 10:07:34.135: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Nov 16 10:07:34.139: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.139: INFO: Container coredns ready: true, restart count 0 Nov 16 10:07:34.139: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.139: INFO: Container coredns ready: true, restart count 0 Nov 16 10:07:34.139: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.139: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 10:07:34.139: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.139: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 10:07:34.139: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 10:07:34.139: INFO: Container local-path-provisioner ready: true, restart count 1 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node 
latest-worker STEP: verifying the node has the label node latest-worker2 Nov 16 10:07:34.242: INFO: Pod coredns-f9fd979d6-l8q79 requesting resource cpu=100m on Node latest-worker2 Nov 16 10:07:34.242: INFO: Pod coredns-f9fd979d6-rhzs8 requesting resource cpu=100m on Node latest-worker2 Nov 16 10:07:34.242: INFO: Pod kindnet-g7vp5 requesting resource cpu=100m on Node latest-worker2 Nov 16 10:07:34.242: INFO: Pod kindnet-jwscz requesting resource cpu=100m on Node latest-worker Nov 16 10:07:34.242: INFO: Pod kube-proxy-bmxmj requesting resource cpu=0m on Node latest-worker2 Nov 16 10:07:34.242: INFO: Pod kube-proxy-cg6dw requesting resource cpu=0m on Node latest-worker Nov 16 10:07:34.242: INFO: Pod local-path-provisioner-78776bfc44-6tlk5 requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Nov 16 10:07:34.242: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Nov 16 10:07:34.250: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-b2829575-2b32-4675-87fb-f5d032520926.1647f51357153938], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b2829575-2b32-4675-87fb-f5d032520926.1647f513b3cad3dd], Reason = [Started], Message = [Started container filler-pod-b2829575-2b32-4675-87fb-f5d032520926] STEP: Considering event: Type = [Normal], Name = [filler-pod-b2829575-2b32-4675-87fb-f5d032520926.1647f5139788a4fe], Reason = [Created], Message = [Created container filler-pod-b2829575-2b32-4675-87fb-f5d032520926] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d.1647f51309b00897], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8506/filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d.1647f51396f72337], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d.1647f513d39a1400], Reason = [Started], Message = [Started container filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d.1647f513c3d86dba], Reason = [Created], Message = [Created container filler-pod-b68b596e-452b-4cfd-858b-8a26dd46580d] STEP: Considering event: Type = [Normal], Name = [filler-pod-b2829575-2b32-4675-87fb-f5d032520926.1647f5130799b9c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8506/filler-pod-b2829575-2b32-4675-87fb-f5d032520926 to latest-worker] STEP: Considering event: Type = [Warning], Name = [additional-pod.1647f513f966b15e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: 
}, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1647f513fb5b2f82], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:39.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8506" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.398 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":193,"skipped":3431,"failed":0} S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:39.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 16 10:07:39.477: INFO: Waiting up to 5m0s for pod "downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0" in namespace "downward-api-7195" to be "Succeeded or Failed" Nov 16 10:07:39.480: INFO: Pod "downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.716931ms Nov 16 10:07:41.484: INFO: Pod "downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006817461s Nov 16 10:07:43.489: INFO: Pod "downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012021995s STEP: Saw pod success Nov 16 10:07:43.489: INFO: Pod "downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0" satisfied condition "Succeeded or Failed" Nov 16 10:07:43.492: INFO: Trying to get logs from node latest-worker pod downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0 container dapi-container: STEP: delete the pod Nov 16 10:07:43.507: INFO: Waiting for pod downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0 to disappear Nov 16 10:07:43.525: INFO: Pod downward-api-14f96f94-5c3b-4933-a22b-8dc6001f19b0 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:43.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7195" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":194,"skipped":3432,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:43.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 16 10:07:43.626: INFO: Waiting up to 5m0s for pod "pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17" in namespace "emptydir-5041" to be "Succeeded or Failed" Nov 16 10:07:43.631: INFO: Pod "pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17": Phase="Pending", Reason="", readiness=false. Elapsed: 5.170925ms Nov 16 10:07:45.732: INFO: Pod "pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106233441s Nov 16 10:07:47.736: INFO: Pod "pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17": Phase="Running", Reason="", readiness=true. Elapsed: 4.109633364s Nov 16 10:07:49.739: INFO: Pod "pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11309675s STEP: Saw pod success Nov 16 10:07:49.739: INFO: Pod "pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17" satisfied condition "Succeeded or Failed" Nov 16 10:07:49.742: INFO: Trying to get logs from node latest-worker pod pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17 container test-container: STEP: delete the pod Nov 16 10:07:49.824: INFO: Waiting for pod pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17 to disappear Nov 16 10:07:49.882: INFO: Pod pod-b3299046-83b4-4e7a-afaa-dfb8b90eea17 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:49.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5041" for this suite. 
• [SLOW TEST:6.358 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3436,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:49.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:07:50.291: INFO: Waiting up to 5m0s for pod "busybox-user-65534-eb048bd8-1dd6-4e6b-bd32-e62ad0165d97" in namespace 
"security-context-test-5246" to be "Succeeded or Failed" Nov 16 10:07:50.294: INFO: Pod "busybox-user-65534-eb048bd8-1dd6-4e6b-bd32-e62ad0165d97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108038ms Nov 16 10:07:52.301: INFO: Pod "busybox-user-65534-eb048bd8-1dd6-4e6b-bd32-e62ad0165d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009773246s Nov 16 10:07:54.307: INFO: Pod "busybox-user-65534-eb048bd8-1dd6-4e6b-bd32-e62ad0165d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015638887s Nov 16 10:07:54.307: INFO: Pod "busybox-user-65534-eb048bd8-1dd6-4e6b-bd32-e62ad0165d97" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:54.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5246" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":3437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:54.325: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:07:54.452: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:07:55.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1887" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":197,"skipped":3491,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:07:55.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:07:55.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4908' Nov 16 10:07:58.545: INFO: stderr: "" Nov 16 10:07:58.545: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Nov 16 10:07:58.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4908' Nov 16 10:07:58.878: INFO: stderr: "" Nov 16 10:07:58.878: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 16 10:07:59.884: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:07:59.884: INFO: Found 0 / 1 Nov 16 10:08:01.020: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:01.020: INFO: Found 0 / 1 Nov 16 10:08:01.883: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:01.883: INFO: Found 1 / 1 Nov 16 10:08:01.883: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 16 10:08:01.887: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:01.887: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 16 10:08:01.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe pod agnhost-primary-gxjrl --namespace=kubectl-4908' Nov 16 10:08:02.009: INFO: stderr: "" Nov 16 10:08:02.010: INFO: stdout: "Name: agnhost-primary-gxjrl\nNamespace: kubectl-4908\nPriority: 0\nNode: latest-worker/172.18.0.15\nStart Time: Mon, 16 Nov 2020 10:07:58 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.176\nIPs:\n IP: 10.244.2.176\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f59f94097e182028d560159f9a76bbd5621913df1ee484dbc56e725220390017\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 16 Nov 2020 10:08:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-kp46d (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-kp46d:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-kp46d\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4908/agnhost-primary-gxjrl to latest-worker\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Nov 16 10:08:02.010: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-4908' Nov 16 10:08:02.158: INFO: stderr: "" Nov 16 10:08:02.158: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4908\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-gxjrl\n" Nov 16 10:08:02.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-4908' Nov 16 10:08:02.279: INFO: stderr: "" Nov 16 10:08:02.279: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4908\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.105.110.76\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.176:6379\nSession Affinity: None\nEvents: \n" Nov 16 10:08:02.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe node latest-control-plane' Nov 16 10:08:02.420: INFO: stderr: "" Nov 16 10:08:02.420: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: 
true\nCreationTimestamp: Sat, 10 Oct 2020 08:58:25 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 16 Nov 2020 10:08:01 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 16 Nov 2020 10:04:18 +0000 Sat, 10 Oct 2020 08:58:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 16 Nov 2020 10:04:18 +0000 Sat, 10 Oct 2020 08:58:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 16 Nov 2020 10:04:18 +0000 Sat, 10 Oct 2020 08:58:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 16 Nov 2020 10:04:18 +0000 Sat, 10 Oct 2020 08:59:37 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: caa260c6a3b946279ec1bc906e7a2062\n System UUID: e7cbf5f9-e358-4304-a4ab-c83e6879c290\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37d\n kube-system 
kindnet-qsltg 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 37d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 37d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 37d\n kube-system kube-proxy-vm99r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 37d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Nov 16 10:08:02.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe namespace kubectl-4908' Nov 16 10:08:02.567: INFO: stderr: "" Nov 16 10:08:02.568: INFO: stdout: "Name: kubectl-4908\nLabels: e2e-framework=kubectl\n e2e-run=c575447e-156c-40dd-8db6-565ac96c74c4\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:08:02.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4908" for this suite. 
• [SLOW TEST:7.527 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":198,"skipped":3495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:08:02.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 
10:08:06.790: INFO: Waiting up to 5m0s for pod "client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6" in namespace "pods-7712" to be "Succeeded or Failed" Nov 16 10:08:06.822: INFO: Pod "client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.345942ms Nov 16 10:08:08.843: INFO: Pod "client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052934351s Nov 16 10:08:10.978: INFO: Pod "client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188066679s STEP: Saw pod success Nov 16 10:08:10.978: INFO: Pod "client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6" satisfied condition "Succeeded or Failed" Nov 16 10:08:10.982: INFO: Trying to get logs from node latest-worker pod client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6 container env3cont: STEP: delete the pod Nov 16 10:08:11.159: INFO: Waiting for pod client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6 to disappear Nov 16 10:08:11.166: INFO: Pod client-envvars-b5135d14-544c-429b-8a77-ade89a88d0a6 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:08:11.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7712" for this suite. 
• [SLOW TEST:8.575 seconds] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:08:11.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Nov 16 10:08:11.254: INFO: namespace kubectl-7438 Nov 16 10:08:11.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7438' Nov 16 10:08:11.543: INFO: stderr: "" Nov 16 
10:08:11.543: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 16 10:08:12.548: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:12.548: INFO: Found 0 / 1 Nov 16 10:08:13.621: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:13.622: INFO: Found 0 / 1 Nov 16 10:08:14.548: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:14.548: INFO: Found 0 / 1 Nov 16 10:08:15.550: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:15.550: INFO: Found 1 / 1 Nov 16 10:08:15.550: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 16 10:08:15.552: INFO: Selector matched 1 pods for map[app:agnhost] Nov 16 10:08:15.552: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 16 10:08:15.552: INFO: wait on agnhost-primary startup in kubectl-7438 Nov 16 10:08:15.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs agnhost-primary-jrrhv agnhost-primary --namespace=kubectl-7438' Nov 16 10:08:15.663: INFO: stderr: "" Nov 16 10:08:15.663: INFO: stdout: "Paused\n" STEP: exposing RC Nov 16 10:08:15.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7438' Nov 16 10:08:15.815: INFO: stderr: "" Nov 16 10:08:15.815: INFO: stdout: "service/rm2 exposed\n" Nov 16 10:08:15.882: INFO: Service rm2 in namespace kubectl-7438 found. STEP: exposing service Nov 16 10:08:17.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7438' Nov 16 10:08:18.049: INFO: stderr: "" Nov 16 10:08:18.049: INFO: stdout: "service/rm3 exposed\n" Nov 16 10:08:18.065: INFO: Service rm3 in namespace kubectl-7438 found. 
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:08:20.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7438" for this suite. • [SLOW TEST:8.905 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":200,"skipped":3584,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:08:20.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if 
kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Nov 16 10:08:20.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7103' Nov 16 10:08:20.268: INFO: stderr: "" Nov 16 10:08:20.268: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Nov 16 10:08:20.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-7103' Nov 16 10:08:20.397: INFO: stderr: "" Nov 16 10:08:20.397: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-11-16T10:08:20Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-16T10:08:20Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n 
\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-16T10:08:20Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7103\",\n \"resourceVersion\": \"9790412\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7103/pods/e2e-test-httpd-pod\",\n \"uid\": \"1194492b-3e04-4e8d-a5e3-3d27b58653f6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-g5q2p\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n 
{\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-g5q2p\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-g5q2p\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:08:20Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:08:20Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:08:20Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:08:20Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-11-16T10:08:20Z\"\n }\n}\n" Nov 16 10:08:20.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-7103' Nov 16 10:08:20.675: INFO: stderr: "W1116 10:08:20.469154 3292 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Nov 16 10:08:20.675: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: 
verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Nov 16 10:08:20.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7103' Nov 16 10:08:25.625: INFO: stderr: "" Nov 16 10:08:25.625: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:08:25.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7103" for this suite. • [SLOW TEST:5.559 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":201,"skipped":3599,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:08:25.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:08:42.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8113" for this suite. • [SLOW TEST:16.582 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":202,"skipped":3611,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:08:42.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
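The repeated "DaemonSet pods can't tolerate node latest-control-plane" entries that follow come from the daemon pods lacking a toleration for the control plane's `node-role.kubernetes.io/master:NoSchedule` taint, so the framework excludes that node from the count. A minimal sketch of a comparable DaemonSet (image and labels are illustrative, not the test's exact manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: httpd:2.4.38-alpine  # illustrative image
      # Uncommenting this toleration would let the daemon pods also
      # schedule onto the tainted control-plane node:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   operator: Exists
      #   effect: NoSchedule
```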
Nov 16 10:08:42.360: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:42.433: INFO: Number of nodes with available pods: 0 Nov 16 10:08:42.433: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:43.439: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:43.443: INFO: Number of nodes with available pods: 0 Nov 16 10:08:43.443: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:44.440: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:44.443: INFO: Number of nodes with available pods: 0 Nov 16 10:08:44.443: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:45.543: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:45.546: INFO: Number of nodes with available pods: 0 Nov 16 10:08:45.546: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:46.438: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:46.440: INFO: Number of nodes with available pods: 1 Nov 16 10:08:46.440: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:47.437: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:47.439: INFO: Number of nodes with available pods: 2 Nov 16 10:08:47.439: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Nov 16 10:08:47.457: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:47.474: INFO: Number of nodes with available pods: 1 Nov 16 10:08:47.474: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:48.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:48.485: INFO: Number of nodes with available pods: 1 Nov 16 10:08:48.485: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:49.479: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:49.484: INFO: Number of nodes with available pods: 1 Nov 16 10:08:49.484: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:50.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:50.485: INFO: Number of nodes with available pods: 1 Nov 16 10:08:50.485: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:51.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:51.484: INFO: Number of nodes with available pods: 1 Nov 16 10:08:51.484: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:52.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Nov 16 10:08:52.485: INFO: Number of nodes with available pods: 1 Nov 16 10:08:52.485: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:53.480: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:53.483: INFO: Number of nodes with available pods: 1 Nov 16 10:08:53.483: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:54.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:54.485: INFO: Number of nodes with available pods: 1 Nov 16 10:08:54.485: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:55.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:55.484: INFO: Number of nodes with available pods: 1 Nov 16 10:08:55.484: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:56.480: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:56.484: INFO: Number of nodes with available pods: 1 Nov 16 10:08:56.484: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:57.481: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:57.485: INFO: Number of nodes with available pods: 1 Nov 16 10:08:57.485: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:58.628: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:58.631: INFO: Number of nodes with available pods: 1 Nov 16 10:08:58.631: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:08:59.479: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:08:59.482: INFO: Number of nodes with available pods: 2 Nov 16 10:08:59.482: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-627, will wait for the garbage collector to delete the pods Nov 16 10:08:59.545: INFO: Deleting DaemonSet.extensions daemon-set took: 6.935674ms Nov 16 10:08:59.945: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.188284ms Nov 16 10:09:05.653: INFO: Number of nodes with available pods: 0 Nov 16 10:09:05.653: INFO: Number of running nodes: 0, number of available pods: 0 Nov 16 10:09:05.656: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-627/daemonsets","resourceVersion":"9790659"},"items":null} Nov 16 10:09:05.659: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-627/pods","resourceVersion":"9790659"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:05.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-627" for this suite. 
• [SLOW TEST:23.455 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":203,"skipped":3629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:09:05.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:05.885: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1949" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":204,"skipped":3681,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:09:05.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Nov 16 10:09:06.006: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Nov 16 10:09:17.835: INFO: >>> kubeConfig: /root/.kube/config Nov 16 10:09:20.805: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:31.584: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9967" for this suite. • [SLOW TEST:25.667 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":205,"skipped":3681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:09:31.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Nov 16 10:09:31.740: INFO: Waiting up to 5m0s for pod "var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e" in namespace "var-expansion-9338" to 
be "Succeeded or Failed" Nov 16 10:09:31.744: INFO: Pod "var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547247ms Nov 16 10:09:33.747: INFO: Pod "var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007265299s Nov 16 10:09:35.751: INFO: Pod "var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011168049s STEP: Saw pod success Nov 16 10:09:35.751: INFO: Pod "var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e" satisfied condition "Succeeded or Failed" Nov 16 10:09:35.755: INFO: Trying to get logs from node latest-worker pod var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e container dapi-container: STEP: delete the pod Nov 16 10:09:35.776: INFO: Waiting for pod var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e to disappear Nov 16 10:09:35.779: INFO: Pod var-expansion-3b0d2682-6a20-4683-b891-4de6a3f6745e no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:35.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9338" for this suite. 
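The var-expansion pod above exercises dependent environment variables: a later `env` entry may reference an earlier one as `$(NAME)`, which the kubelet expands before starting the container. A hypothetical pod showing the pattern (names and values are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOO_COMPOSED"]
    env:
    - name: FOO
      value: "bar"
    - name: FOO_COMPOSED
      value: "prefix-$(FOO)"  # expands to "prefix-bar" at container start
```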
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3709,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:09:35.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-4500 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4500 to expose endpoints map[] Nov 16 10:09:35.918: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Nov 16 10:09:36.934: INFO: successfully validated that service endpoint-test2 in namespace services-4500 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-4500 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4500 to expose endpoints map[pod1:[80]] Nov 16 10:09:39.987: INFO: successfully validated that service endpoint-test2 in namespace services-4500 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in
namespace services-4500 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4500 to expose endpoints map[pod1:[80] pod2:[80]] Nov 16 10:09:44.044: INFO: successfully validated that service endpoint-test2 in namespace services-4500 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-4500 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4500 to expose endpoints map[pod2:[80]] Nov 16 10:09:44.076: INFO: successfully validated that service endpoint-test2 in namespace services-4500 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-4500 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4500 to expose endpoints map[] Nov 16 10:09:45.164: INFO: successfully validated that service endpoint-test2 in namespace services-4500 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:45.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4500" for this suite. 
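The "waiting up to 3m0s for service endpoint-test2 ... to expose endpoints" entries above follow the framework's standard poll-with-timeout pattern: re-run a check until it succeeds or a deadline passes. A minimal shell sketch of that pattern (the `wait_for` helper is illustrative, not part of kubectl or the e2e framework):

```shell
#!/bin/sh
# wait_for TIMEOUT_SECONDS CMD [ARGS...]
# Re-runs CMD once per second until it exits 0 (return 0)
# or the deadline passes (return 1).
wait_for() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}
```

Against a live cluster one might use it as, say, `wait_for 180 kubectl get endpoints endpoint-test2 -n services-4500` (assumes a configured kubeconfig).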
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.419 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":207,"skipped":3729,"failed":0} [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:09:45.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:09:45.347: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b61395ab-65a8-453b-9815-d84826f7b2ed", Controller:(*bool)(0xc005297222), BlockOwnerDeletion:(*bool)(0xc005297223)}} Nov 16 10:09:45.423: INFO: 
pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"cae1e52e-9c0f-48c7-975e-8069a77b2b57", Controller:(*bool)(0xc003c97a6a), BlockOwnerDeletion:(*bool)(0xc003c97a6b)}} Nov 16 10:09:45.435: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6ad8208f-455a-471c-ab3c-d5db90828195", Controller:(*bool)(0xc00529754a), BlockOwnerDeletion:(*bool)(0xc00529754b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:50.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7700" for this suite. • [SLOW TEST:5.273 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":208,"skipped":3729,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 
10:09:50.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-60e6212e-bdf1-404e-ad94-78fedea08a85 STEP: Creating configMap with name cm-test-opt-upd-726a4cea-a34e-43bc-afd6-741f5453c574 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-60e6212e-bdf1-404e-ad94-78fedea08a85 STEP: Updating configmap cm-test-opt-upd-726a4cea-a34e-43bc-afd6-741f5453c574 STEP: Creating configMap with name cm-test-opt-create-32552ab5-bd46-4ba5-a425-3304004324f8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:09:58.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1080" for this suite. 
• [SLOW TEST:8.264 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3734,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:09:58.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 16 10:09:58.852: INFO: Waiting up to 5m0s for pod "pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700" in namespace "emptydir-9507" to be "Succeeded or Failed" Nov 16 10:09:58.859: INFO: Pod "pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.805889ms Nov 16 10:10:00.865: INFO: Pod "pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012937286s Nov 16 10:10:02.870: INFO: Pod "pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018139974s STEP: Saw pod success Nov 16 10:10:02.870: INFO: Pod "pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700" satisfied condition "Succeeded or Failed" Nov 16 10:10:02.874: INFO: Trying to get logs from node latest-worker pod pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700 container test-container: STEP: delete the pod Nov 16 10:10:02.933: INFO: Waiting for pod pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700 to disappear Nov 16 10:10:02.937: INFO: Pod pod-db5d1c2e-d5f5-4844-8b02-0e9637a38700 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:10:02.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9507" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3739,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:10:02.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3556 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3556 I1116 10:10:03.393924 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3556, replica count: 2 I1116 10:10:06.444342 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 10:10:09.444602 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 0 runningButNotReady Nov 16 10:10:09.444: INFO: Creating new exec pod Nov 16 10:10:16.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3556 execpodk99ml -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Nov 16 10:10:16.750: INFO: stderr: "I1116 10:10:16.627393 3328 log.go:181] (0xc0009f94a0) (0xc000c17cc0) Create stream\nI1116 10:10:16.627447 3328 log.go:181] (0xc0009f94a0) (0xc000c17cc0) Stream added, broadcasting: 1\nI1116 10:10:16.628768 3328 log.go:181] (0xc0009f94a0) Reply frame received for 1\nI1116 10:10:16.628805 3328 log.go:181] (0xc0009f94a0) (0xc000c17d60) Create stream\nI1116 10:10:16.628817 3328 log.go:181] (0xc0009f94a0) (0xc000c17d60) Stream added, broadcasting: 3\nI1116 10:10:16.629512 3328 log.go:181] (0xc0009f94a0) Reply frame received for 3\nI1116 10:10:16.629557 3328 log.go:181] (0xc0009f94a0) (0xc0006ca1e0) Create stream\nI1116 10:10:16.629585 3328 log.go:181] (0xc0009f94a0) (0xc0006ca1e0) Stream added, broadcasting: 5\nI1116 10:10:16.630305 3328 log.go:181] (0xc0009f94a0) Reply frame received for 5\nI1116 10:10:16.741767 3328 log.go:181] (0xc0009f94a0) Data frame received for 5\nI1116 10:10:16.741791 3328 log.go:181] (0xc0006ca1e0) (5) Data frame handling\nI1116 10:10:16.741798 3328 log.go:181] (0xc0006ca1e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1116 10:10:16.742081 3328 log.go:181] (0xc0009f94a0) Data frame received for 5\nI1116 10:10:16.742095 3328 log.go:181] (0xc0006ca1e0) (5) Data frame handling\nI1116 10:10:16.742129 3328 log.go:181] (0xc0009f94a0) Data frame received for 3\nI1116 10:10:16.742180 3328 log.go:181] (0xc000c17d60) (3) Data frame handling\nI1116 10:10:16.744046 3328 log.go:181] (0xc0009f94a0) Data frame received for 1\nI1116 10:10:16.744063 3328 log.go:181] (0xc000c17cc0) (1) Data frame handling\nI1116 
10:10:16.744071 3328 log.go:181] (0xc000c17cc0) (1) Data frame sent\nI1116 10:10:16.744080 3328 log.go:181] (0xc0009f94a0) (0xc000c17cc0) Stream removed, broadcasting: 1\nI1116 10:10:16.744122 3328 log.go:181] (0xc0009f94a0) Go away received\nI1116 10:10:16.744378 3328 log.go:181] (0xc0009f94a0) (0xc000c17cc0) Stream removed, broadcasting: 1\nI1116 10:10:16.744390 3328 log.go:181] (0xc0009f94a0) (0xc000c17d60) Stream removed, broadcasting: 3\nI1116 10:10:16.744395 3328 log.go:181] (0xc0009f94a0) (0xc0006ca1e0) Stream removed, broadcasting: 5\n" Nov 16 10:10:16.750: INFO: stdout: "" Nov 16 10:10:16.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3556 execpodk99ml -- /bin/sh -x -c nc -zv -t -w 2 10.97.202.28 80' Nov 16 10:10:16.950: INFO: stderr: "I1116 10:10:16.871902 3346 log.go:181] (0xc000f31130) (0xc000e908c0) Create stream\nI1116 10:10:16.871986 3346 log.go:181] (0xc000f31130) (0xc000e908c0) Stream added, broadcasting: 1\nI1116 10:10:16.879592 3346 log.go:181] (0xc000f31130) Reply frame received for 1\nI1116 10:10:16.879762 3346 log.go:181] (0xc000f31130) (0xc000e90000) Create stream\nI1116 10:10:16.879854 3346 log.go:181] (0xc000f31130) (0xc000e90000) Stream added, broadcasting: 3\nI1116 10:10:16.882011 3346 log.go:181] (0xc000f31130) Reply frame received for 3\nI1116 10:10:16.882058 3346 log.go:181] (0xc000f31130) (0xc0001a1ea0) Create stream\nI1116 10:10:16.882081 3346 log.go:181] (0xc000f31130) (0xc0001a1ea0) Stream added, broadcasting: 5\nI1116 10:10:16.882766 3346 log.go:181] (0xc000f31130) Reply frame received for 5\nI1116 10:10:16.941158 3346 log.go:181] (0xc000f31130) Data frame received for 3\nI1116 10:10:16.941196 3346 log.go:181] (0xc000e90000) (3) Data frame handling\nI1116 10:10:16.941222 3346 log.go:181] (0xc000f31130) Data frame received for 5\nI1116 10:10:16.941232 3346 log.go:181] (0xc0001a1ea0) (5) Data frame handling\nI1116 10:10:16.941243 3346 
log.go:181] (0xc0001a1ea0) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.202.28 80\nConnection to 10.97.202.28 80 port [tcp/http] succeeded!\nI1116 10:10:16.941320 3346 log.go:181] (0xc000f31130) Data frame received for 5\nI1116 10:10:16.941343 3346 log.go:181] (0xc0001a1ea0) (5) Data frame handling\nI1116 10:10:16.942821 3346 log.go:181] (0xc000f31130) Data frame received for 1\nI1116 10:10:16.942858 3346 log.go:181] (0xc000e908c0) (1) Data frame handling\nI1116 10:10:16.942896 3346 log.go:181] (0xc000e908c0) (1) Data frame sent\nI1116 10:10:16.943010 3346 log.go:181] (0xc000f31130) (0xc000e908c0) Stream removed, broadcasting: 1\nI1116 10:10:16.943130 3346 log.go:181] (0xc000f31130) Go away received\nI1116 10:10:16.943535 3346 log.go:181] (0xc000f31130) (0xc000e908c0) Stream removed, broadcasting: 1\nI1116 10:10:16.943564 3346 log.go:181] (0xc000f31130) (0xc000e90000) Stream removed, broadcasting: 3\nI1116 10:10:16.943577 3346 log.go:181] (0xc000f31130) (0xc0001a1ea0) Stream removed, broadcasting: 5\n" Nov 16 10:10:16.950: INFO: stdout: "" Nov 16 10:10:16.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3556 execpodk99ml -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30507' Nov 16 10:10:17.148: INFO: stderr: "I1116 10:10:17.082282 3363 log.go:181] (0xc0007f1970) (0xc0007e8d20) Create stream\nI1116 10:10:17.082335 3363 log.go:181] (0xc0007f1970) (0xc0007e8d20) Stream added, broadcasting: 1\nI1116 10:10:17.085861 3363 log.go:181] (0xc0007f1970) Reply frame received for 1\nI1116 10:10:17.085893 3363 log.go:181] (0xc0007f1970) (0xc0007e8000) Create stream\nI1116 10:10:17.085906 3363 log.go:181] (0xc0007f1970) (0xc0007e8000) Stream added, broadcasting: 3\nI1116 10:10:17.086708 3363 log.go:181] (0xc0007f1970) Reply frame received for 3\nI1116 10:10:17.086753 3363 log.go:181] (0xc0007f1970) (0xc0007e80a0) Create stream\nI1116 10:10:17.086764 3363 log.go:181] (0xc0007f1970) (0xc0007e80a0) 
Stream added, broadcasting: 5\nI1116 10:10:17.087491 3363 log.go:181] (0xc0007f1970) Reply frame received for 5\nI1116 10:10:17.138310 3363 log.go:181] (0xc0007f1970) Data frame received for 5\nI1116 10:10:17.138341 3363 log.go:181] (0xc0007e80a0) (5) Data frame handling\nI1116 10:10:17.138360 3363 log.go:181] (0xc0007e80a0) (5) Data frame sent\nI1116 10:10:17.138371 3363 log.go:181] (0xc0007f1970) Data frame received for 5\nI1116 10:10:17.138391 3363 log.go:181] (0xc0007e80a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30507\nConnection to 172.18.0.15 30507 port [tcp/30507] succeeded!\nI1116 10:10:17.138423 3363 log.go:181] (0xc0007e80a0) (5) Data frame sent\nI1116 10:10:17.138865 3363 log.go:181] (0xc0007f1970) Data frame received for 5\nI1116 10:10:17.138931 3363 log.go:181] (0xc0007e80a0) (5) Data frame handling\nI1116 10:10:17.138963 3363 log.go:181] (0xc0007f1970) Data frame received for 3\nI1116 10:10:17.138977 3363 log.go:181] (0xc0007e8000) (3) Data frame handling\nI1116 10:10:17.140956 3363 log.go:181] (0xc0007f1970) Data frame received for 1\nI1116 10:10:17.140995 3363 log.go:181] (0xc0007e8d20) (1) Data frame handling\nI1116 10:10:17.141023 3363 log.go:181] (0xc0007e8d20) (1) Data frame sent\nI1116 10:10:17.141042 3363 log.go:181] (0xc0007f1970) (0xc0007e8d20) Stream removed, broadcasting: 1\nI1116 10:10:17.141063 3363 log.go:181] (0xc0007f1970) Go away received\nI1116 10:10:17.141499 3363 log.go:181] (0xc0007f1970) (0xc0007e8d20) Stream removed, broadcasting: 1\nI1116 10:10:17.141530 3363 log.go:181] (0xc0007f1970) (0xc0007e8000) Stream removed, broadcasting: 3\nI1116 10:10:17.141550 3363 log.go:181] (0xc0007f1970) (0xc0007e80a0) Stream removed, broadcasting: 5\n" Nov 16 10:10:17.148: INFO: stdout: "" Nov 16 10:10:17.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3556 execpodk99ml -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30507' Nov 16 10:10:17.357: 
INFO: stderr: "I1116 10:10:17.274220 3381 log.go:181] (0xc000764fd0) (0xc000e408c0) Create stream\nI1116 10:10:17.274275 3381 log.go:181] (0xc000764fd0) (0xc000e408c0) Stream added, broadcasting: 1\nI1116 10:10:17.279422 3381 log.go:181] (0xc000764fd0) Reply frame received for 1\nI1116 10:10:17.279475 3381 log.go:181] (0xc000764fd0) (0xc0007a5ea0) Create stream\nI1116 10:10:17.279492 3381 log.go:181] (0xc000764fd0) (0xc0007a5ea0) Stream added, broadcasting: 3\nI1116 10:10:17.280491 3381 log.go:181] (0xc000764fd0) Reply frame received for 3\nI1116 10:10:17.280533 3381 log.go:181] (0xc000764fd0) (0xc000e40000) Create stream\nI1116 10:10:17.280545 3381 log.go:181] (0xc000764fd0) (0xc000e40000) Stream added, broadcasting: 5\nI1116 10:10:17.281650 3381 log.go:181] (0xc000764fd0) Reply frame received for 5\nI1116 10:10:17.349555 3381 log.go:181] (0xc000764fd0) Data frame received for 3\nI1116 10:10:17.349580 3381 log.go:181] (0xc0007a5ea0) (3) Data frame handling\nI1116 10:10:17.349605 3381 log.go:181] (0xc000764fd0) Data frame received for 5\nI1116 10:10:17.349627 3381 log.go:181] (0xc000e40000) (5) Data frame handling\nI1116 10:10:17.349646 3381 log.go:181] (0xc000e40000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30507\nConnection to 172.18.0.14 30507 port [tcp/30507] succeeded!\nI1116 10:10:17.349664 3381 log.go:181] (0xc000764fd0) Data frame received for 5\nI1116 10:10:17.349768 3381 log.go:181] (0xc000e40000) (5) Data frame handling\nI1116 10:10:17.351245 3381 log.go:181] (0xc000764fd0) Data frame received for 1\nI1116 10:10:17.351263 3381 log.go:181] (0xc000e408c0) (1) Data frame handling\nI1116 10:10:17.351278 3381 log.go:181] (0xc000e408c0) (1) Data frame sent\nI1116 10:10:17.351289 3381 log.go:181] (0xc000764fd0) (0xc000e408c0) Stream removed, broadcasting: 1\nI1116 10:10:17.351300 3381 log.go:181] (0xc000764fd0) Go away received\nI1116 10:10:17.351709 3381 log.go:181] (0xc000764fd0) (0xc000e408c0) Stream removed, broadcasting: 1\nI1116 10:10:17.351728 
3381 log.go:181] (0xc000764fd0) (0xc0007a5ea0) Stream removed, broadcasting: 3\nI1116 10:10:17.351737 3381 log.go:181] (0xc000764fd0) (0xc000e40000) Stream removed, broadcasting: 5\n" Nov 16 10:10:17.357: INFO: stdout: "" Nov 16 10:10:17.357: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:10:17.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3556" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.509 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":211,"skipped":3741,"failed":0} SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:10:17.453: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5b494d2c-2b1f-49af-980e-3e0c727e6e17 STEP: Creating secret with name s-test-opt-upd-b3a95417-0a40-4af2-a2b9-a70980edd7f8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5b494d2c-2b1f-49af-980e-3e0c727e6e17 STEP: Updating secret s-test-opt-upd-b3a95417-0a40-4af2-a2b9-a70980edd7f8 STEP: Creating secret with name s-test-opt-create-fd3c52df-d436-4fbd-9006-8cb4f9399b23 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:10:25.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-329" for this suite. 
• [SLOW TEST:8.297 seconds] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":212,"skipped":3744,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:10:25.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Nov 16 10:10:25.947: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5259 /api/v1/namespaces/watch-5259/configmaps/e2e-watch-test-watch-closed 
8106dad7-7ec3-445a-b6fd-2d6ad6c07bf8 9791281 0 2020-11-16 10:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-16 10:10:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 10:10:25.947: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5259 /api/v1/namespaces/watch-5259/configmaps/e2e-watch-test-watch-closed 8106dad7-7ec3-445a-b6fd-2d6ad6c07bf8 9791282 0 2020-11-16 10:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-16 10:10:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Nov 16 10:10:26.150: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5259 /api/v1/namespaces/watch-5259/configmaps/e2e-watch-test-watch-closed 8106dad7-7ec3-445a-b6fd-2d6ad6c07bf8 9791283 0 2020-11-16 10:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-16 10:10:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 10:10:26.150: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5259 /api/v1/namespaces/watch-5259/configmaps/e2e-watch-test-watch-closed 8106dad7-7ec3-445a-b6fd-2d6ad6c07bf8 9791284 0 2020-11-16 
10:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-16 10:10:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:10:26.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5259" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":213,"skipped":3752,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:10:26.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
projected-configmap-test-volume-map-512d3b94-0e30-4521-a40f-636997100f84 STEP: Creating a pod to test consume configMaps Nov 16 10:10:26.365: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6" in namespace "projected-3174" to be "Succeeded or Failed" Nov 16 10:10:27.105: INFO: Pod "pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 740.046457ms Nov 16 10:10:29.109: INFO: Pod "pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.743838392s Nov 16 10:10:31.117: INFO: Pod "pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.751658298s STEP: Saw pod success Nov 16 10:10:31.117: INFO: Pod "pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6" satisfied condition "Succeeded or Failed" Nov 16 10:10:31.120: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6 container projected-configmap-volume-test: STEP: delete the pod Nov 16 10:10:31.377: INFO: Waiting for pod pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6 to disappear Nov 16 10:10:31.425: INFO: Pod pod-projected-configmaps-755dd286-7608-4a21-a6c4-fe4575a67dc6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:10:31.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3174" for this suite. 
• [SLOW TEST:5.192 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":214,"skipped":3753,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:10:31.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8691.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8691.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 16 10:10:38.167: INFO: DNS probes using dns-test-13a0177a-3754-4f4e-aa12-9761e60ebeb3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8691.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8691.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 16 10:10:44.364: INFO: File wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 16 10:10:44.368: INFO: File jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 16 10:10:44.368: INFO: Lookups using dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 failed for: [wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local] Nov 16 10:10:49.385: INFO: File wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Nov 16 10:10:49.390: INFO: File jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:10:49.390: INFO: Lookups using dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 failed for: [wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local]
Nov 16 10:10:54.373: INFO: File wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:10:54.377: INFO: File jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:10:54.377: INFO: Lookups using dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 failed for: [wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local]
Nov 16 10:10:59.373: INFO: File wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:10:59.376: INFO: File jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:10:59.376: INFO: Lookups using dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 failed for: [wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local]
Nov 16 10:11:04.373: INFO: File wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:11:04.377: INFO: File jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local from pod dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 contains 'foo.example.com. ' instead of 'bar.example.com.'
Nov 16 10:11:04.377: INFO: Lookups using dns-8691/dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 failed for: [wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local]
Nov 16 10:11:09.437: INFO: DNS probes using dns-test-d1fe6ea5-1918-4c60-b284-b965a969a582 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8691.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8691.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8691.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8691.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 16 10:11:18.128: INFO: DNS probes using dns-test-0fd65c1e-3687-487e-8974-5d2b5bd58b08 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:11:18.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8691" for this suite.
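For reference, the object this DNS test drives can be sketched as a Kubernetes manifest. The names and namespace come straight from the log above; every other field is an assumption, not the test's actual definition:

```yaml
# Sketch (assumed fields) of the ExternalName Service exercised above.
# The log shows it created pointing at foo.example.com, patched to
# bar.example.com, and finally converted to type ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3      # name taken from the probed DNS records
  namespace: dns-8691           # namespace taken from the log
spec:
  type: ExternalName
  externalName: foo.example.com # later changed to bar.example.com
```

The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are the probe pods polling every five seconds until cluster DNS catches up with the patched `externalName`, which is why the test still succeeds after the transient failures.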
• [SLOW TEST:46.976 seconds]
[sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":215,"skipped":3755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:11:18.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-1498e5bc-02af-4e16-a8eb-fa894e6e6710 in namespace container-probe-1280
Nov 16 10:11:24.533: INFO: Started pod busybox-1498e5bc-02af-4e16-a8eb-fa894e6e6710 in namespace container-probe-1280
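The kind of exec liveness probe this spec exercises can be sketched as a pod manifest. Only the probe command (`cat /tmp/health`) and the busybox naming come from the log; the image, command, and timing values are illustrative assumptions:

```yaml
# Sketch (assumed fields) of a pod whose exec liveness probe fails once
# /tmp/health is removed, causing kubelet to restart the container --
# the restart-count bump from 0 to 1 observed below.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-sketch   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                # assumed image
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # probe named in the test
      initialDelaySeconds: 5              # assumed timings
      periodSeconds: 5
```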
STEP: checking the pod's current state and verifying that restartCount is present
Nov 16 10:11:24.536: INFO: Initial restart count of pod busybox-1498e5bc-02af-4e16-a8eb-fa894e6e6710 is 0
Nov 16 10:12:18.941: INFO: Restart count of pod container-probe-1280/busybox-1498e5bc-02af-4e16-a8eb-fa894e6e6710 is now 1 (54.404570141s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:12:19.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1280" for this suite.
• [SLOW TEST:60.646 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3780,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:12:19.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-a14b13f0-47ec-4b94-a9f5-b9b902609e89
STEP: Creating a pod to test consume secrets
Nov 16 10:12:19.167: INFO: Waiting up to 5m0s for pod "pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa" in namespace "secrets-6309" to be "Succeeded or Failed"
Nov 16 10:12:19.206: INFO: Pod "pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa": Phase="Pending", Reason="", readiness=false. Elapsed: 39.091692ms
Nov 16 10:12:21.209: INFO: Pod "pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042191543s
Nov 16 10:12:23.213: INFO: Pod "pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa": Phase="Running", Reason="", readiness=true. Elapsed: 4.046848567s
Nov 16 10:12:25.241: INFO: Pod "pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074654242s
STEP: Saw pod success
Nov 16 10:12:25.241: INFO: Pod "pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa" satisfied condition "Succeeded or Failed"
Nov 16 10:12:25.253: INFO: Trying to get logs from node latest-worker pod pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa container secret-env-test:
STEP: delete the pod
Nov 16 10:12:25.338: INFO: Waiting for pod pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa to disappear
Nov 16 10:12:25.343: INFO: Pod pod-secrets-a09eafa0-bc4e-4bc0-921b-930a8ff824fa no longer exists
[AfterEach] [sig-api-machinery] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:12:25.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6309" for this suite.
• [SLOW TEST:6.297 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3783,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:12:25.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Nov 16 10:12:25.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170" in namespace "downward-api-1045" to be "Succeeded or Failed"
Nov 16 10:12:25.463: INFO: Pod "downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797375ms
Nov 16 10:12:27.468: INFO: Pod "downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008758233s
Nov 16 10:12:29.634: INFO: Pod "downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174857795s
STEP: Saw pod success
Nov 16 10:12:29.634: INFO: Pod "downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170" satisfied condition "Succeeded or Failed"
Nov 16 10:12:29.638: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170 container client-container:
STEP: delete the pod
Nov 16 10:12:29.822: INFO: Waiting for pod downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170 to disappear
Nov 16 10:12:29.832: INFO: Pod downwardapi-volume-46684f0c-1a78-4c5a-8871-b4963cfc0170 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:12:29.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1045" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":218,"skipped":3798,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:12:29.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 16 10:12:30.625: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 16 10:12:32.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118350, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118350, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118350, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118350, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 16 10:12:35.706: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:12:35.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5999" for this suite.
STEP: Destroying namespace "webhook-5999-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.164 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":219,"skipped":3811,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:12:36.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support rollover [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 10:12:36.134: INFO: Pod name rollover-pod: Found 0 pods out of 1
Nov 16 10:12:41.145: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 16 10:12:41.145: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Nov 16 10:12:43.151: INFO: Creating deployment "test-rollover-deployment"
Nov 16 10:12:43.177: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Nov 16 10:12:45.184: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Nov 16 10:12:45.190: INFO: Ensure that both replica sets have 1 created replica
Nov 16 10:12:45.200: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Nov 16 10:12:45.207: INFO: Updating deployment test-rollover-deployment
Nov 16 10:12:45.207: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Nov 16 10:12:47.226: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
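The rollover shape being verified can be sketched as a Deployment manifest. The labels, image, `minReadySeconds`, and rolling-update parameters below are taken from the Deployment dump later in this log; the layout itself is a reconstruction, not the suite's actual definition:

```yaml
# Sketch of the rollover Deployment, reconstructed from the logged spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10           # delays availability, forcing a slow rollover
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # the old pod stays until the new one is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

The repeated "all replica sets need to contain the pod-template-hash label" polls that follow are the test waiting out `minReadySeconds` before the new ReplicaSet is counted available and the old ones are scaled to zero.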
Nov 16 10:12:47.233: INFO: Make sure deployment "test-rollover-deployment" is complete Nov 16 10:12:47.239: INFO: all replica sets need to contain the pod-template-hash label Nov 16 10:12:47.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118365, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:12:49.248: INFO: all replica sets need to contain the pod-template-hash label Nov 16 10:12:49.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118368, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:12:51.248: INFO: all replica sets need to contain the pod-template-hash label Nov 16 10:12:51.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118368, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:12:53.246: INFO: all replica sets need to contain the pod-template-hash label Nov 16 10:12:53.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63741118368, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:12:55.248: INFO: all replica sets need to contain the pod-template-hash label Nov 16 10:12:55.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118368, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:12:57.247: INFO: all replica sets need to contain the pod-template-hash label Nov 16 10:12:57.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118368, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118363, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:12:59.246: INFO: Nov 16 10:12:59.246: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 16 10:12:59.253: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4678 /apis/apps/v1/namespaces/deployment-4678/deployments/test-rollover-deployment 97a2f753-b692-42e2-91d9-b174d07f64e7 9792118 2 2020-11-16 10:12:43 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-11-16 10:12:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 10:12:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055e6938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-16 10:12:43 +0000 
UTC,LastTransitionTime:2020-11-16 10:12:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-11-16 10:12:58 +0000 UTC,LastTransitionTime:2020-11-16 10:12:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 16 10:12:59.256: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-4678 /apis/apps/v1/namespaces/deployment-4678/replicasets/test-rollover-deployment-5797c7764 29c47cc2-226e-4a8e-af46-4c151ea0b539 9792105 2 2020-11-16 10:12:45 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 97a2f753-b692-42e2-91d9-b174d07f64e7 0xc002c2a570 0xc002c2a571}] [] [{kube-controller-manager Update apps/v1 2020-11-16 10:12:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97a2f753-b692-42e2-91d9-b174d07f64e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002c2a5e8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:12:59.256: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Nov 16 10:12:59.256: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4678 /apis/apps/v1/namespaces/deployment-4678/replicasets/test-rollover-controller 05c44bc5-f1f0-4756-a7e1-d1ac36973408 9792117 2 2020-11-16 10:12:36 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 97a2f753-b692-42e2-91d9-b174d07f64e7 0xc002c2a45f 0xc002c2a470}] [] [{e2e.test Update apps/v1 2020-11-16 10:12:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 10:12:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97a2f753-b692-42e2-91d9-b174d07f64e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c2a508 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:12:59.256: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-4678 /apis/apps/v1/namespaces/deployment-4678/replicasets/test-rollover-deployment-78bc8b888c 07ba31e9-e856-49dd-85b6-8d60ce76e6d4 9792060 2 2020-11-16 10:12:43 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 97a2f753-b692-42e2-91d9-b174d07f64e7 0xc002c2a657 0xc002c2a658}] [] [{kube-controller-manager Update apps/v1 2020-11-16 10:12:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97a2f753-b692-42e2-91d9-b174d07f64e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002c2a6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:12:59.259: INFO: Pod "test-rollover-deployment-5797c7764-mtpvl" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-mtpvl test-rollover-deployment-5797c7764- deployment-4678 /api/v1/namespaces/deployment-4678/pods/test-rollover-deployment-5797c7764-mtpvl ed11b410-9c7f-4f35-a75a-a43c00fc883f 9792075 0 2020-11-16 10:12:45 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 29c47cc2-226e-4a8e-af46-4c151ea0b539 0xc002c2ac60 0xc002c2ac61}] [] [{kube-controller-manager Update v1 2020-11-16 10:12:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"29c47cc2-226e-4a8e-af46-4c151ea0b539\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:12:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.201\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rr427,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rr427,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rr427,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:12:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:12:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:12:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:12:45 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.201,StartTime:2020-11-16 10:12:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:12:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://8b99738d1bfdfa9fb13018720678912a58aa73e27cfc70ddb019a9a5134b334b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:12:59.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4678" for this suite.
• [SLOW TEST:23.259 seconds]
[sig-apps] Deployment
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":220,"skipped":3815,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:12:59.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Nov 16 10:13:00.597: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Nov 16 10:13:02.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118380, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118380, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118380, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118380, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 16 10:13:05.686: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 10:13:05.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:13:06.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2321" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.688 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":221,"skipped":3830,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:13:06.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Nov 16 10:13:14.222: INFO: 0 pods remaining
Nov 16 10:13:14.222: INFO: 0 pods has nil DeletionTimestamp
Nov 16 10:13:14.222: INFO:
Nov 16 10:13:15.136: INFO: 0 pods remaining
Nov 16 10:13:15.136: INFO: 0 pods has nil DeletionTimestamp
Nov 16 10:13:15.136: INFO:
STEP: Gathering metrics
W1116 10:13:18.302119 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 16 10:14:20.660: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:14:20.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3263" for this suite.
• [SLOW TEST:73.714 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":222,"skipped":3834,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:14:20.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Nov 16 10:16:21.376: INFO: Successfully updated pod "var-expansion-f2a7cfe3-e7ac-423b-afd0-16b313d36afb"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Nov 16 10:16:23.387: INFO: Deleting pod "var-expansion-f2a7cfe3-e7ac-423b-afd0-16b313d36afb" in namespace "var-expansion-8217"
Nov 16 10:16:23.392: INFO: Wait up to 5m0s for pod "var-expansion-f2a7cfe3-e7ac-423b-afd0-16b313d36afb" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:17:07.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8217" for this suite.
• [SLOW TEST:166.746 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":223,"skipped":3845,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:17:07.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Nov 16 10:17:07.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887" in namespace "downward-api-3604" to be "Succeeded or Failed"
Nov 16 10:17:07.565: INFO: Pod "downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887": Phase="Pending", Reason="", readiness=false. Elapsed: 31.086406ms
Nov 16 10:17:09.854: INFO: Pod "downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319615523s
Nov 16 10:17:11.858: INFO: Pod "downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.323901794s
STEP: Saw pod success
Nov 16 10:17:11.858: INFO: Pod "downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887" satisfied condition "Succeeded or Failed"
Nov 16 10:17:11.860: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887 container client-container:
STEP: delete the pod
Nov 16 10:17:12.004: INFO: Waiting for pod downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887 to disappear
Nov 16 10:17:12.020: INFO: Pod downwardapi-volume-1aea2b3f-64f1-4317-aa82-2b22ff308887 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:17:12.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3604" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3875,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:17:12.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 10:17:12.118: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Nov 16 10:17:12.128: INFO: Number of nodes with available pods: 0
Nov 16 10:17:12.128: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Nov 16 10:17:12.196: INFO: Number of nodes with available pods: 0
Nov 16 10:17:12.196: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:13.201: INFO: Number of nodes with available pods: 0
Nov 16 10:17:13.201: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:14.296: INFO: Number of nodes with available pods: 0
Nov 16 10:17:14.296: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:15.201: INFO: Number of nodes with available pods: 0
Nov 16 10:17:15.201: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:16.200: INFO: Number of nodes with available pods: 1
Nov 16 10:17:16.200: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Nov 16 10:17:16.230: INFO: Number of nodes with available pods: 1
Nov 16 10:17:16.230: INFO: Number of running nodes: 0, number of available pods: 1
Nov 16 10:17:17.234: INFO: Number of nodes with available pods: 0
Nov 16 10:17:17.234: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Nov 16 10:17:17.303: INFO: Number of nodes with available pods: 0
Nov 16 10:17:17.303: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:18.306: INFO: Number of nodes with available pods: 0
Nov 16 10:17:18.306: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:19.307: INFO: Number of nodes with available pods: 0
Nov 16 10:17:19.307: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:20.307: INFO: Number of nodes with available pods: 0
Nov 16 10:17:20.307: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:21.307: INFO: Number of nodes with available pods: 0
Nov 16 10:17:21.307: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:22.308: INFO: Number of nodes with available pods: 0
Nov 16 10:17:22.308: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:23.307: INFO: Number of nodes with available pods: 0
Nov 16 10:17:23.307: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:24.306: INFO: Number of nodes with available pods: 0
Nov 16 10:17:24.306: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:25.314: INFO: Number of nodes with available pods: 0
Nov 16 10:17:25.314: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:26.339: INFO: Number of nodes with available pods: 0
Nov 16 10:17:26.339: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:27.308: INFO: Number of nodes with available pods: 0
Nov 16 10:17:27.308: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:28.309: INFO: Number of nodes with available pods: 0
Nov 16 10:17:28.309: INFO: Node latest-worker2 is running more than one daemon pod
Nov 16 10:17:29.307: INFO: Number of nodes with available pods: 1
Nov 16 10:17:29.307: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2481, will wait for the garbage collector to delete the pods
Nov 16 10:17:29.374: INFO: Deleting DaemonSet.extensions daemon-set took: 6.951882ms
Nov 16 10:17:29.774: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.255153ms
Nov 16 10:17:35.697: INFO: Number of nodes with available pods: 0
Nov 16 10:17:35.697: INFO: Number of running nodes: 0, number of available pods: 0
Nov 16 10:17:35.700: INFO: daemonset:
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2481/daemonsets","resourceVersion":"9793288"},"items":null} Nov 16 10:17:35.702: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2481/pods","resourceVersion":"9793288"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:17:35.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2481" for this suite. • [SLOW TEST:23.718 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":225,"skipped":3892,"failed":0} SSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:17:35.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching 
pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Nov 16 10:17:40.344: INFO: Successfully updated pod "adopt-release-nmx8d" STEP: Checking that the Job readopts the Pod Nov 16 10:17:40.344: INFO: Waiting up to 15m0s for pod "adopt-release-nmx8d" in namespace "job-3094" to be "adopted" Nov 16 10:17:40.368: INFO: Pod "adopt-release-nmx8d": Phase="Running", Reason="", readiness=true. Elapsed: 23.1073ms Nov 16 10:17:42.883: INFO: Pod "adopt-release-nmx8d": Phase="Running", Reason="", readiness=true. Elapsed: 2.538673068s Nov 16 10:17:42.883: INFO: Pod "adopt-release-nmx8d" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Nov 16 10:17:43.394: INFO: Successfully updated pod "adopt-release-nmx8d" STEP: Checking that the Job releases the Pod Nov 16 10:17:43.394: INFO: Waiting up to 15m0s for pod "adopt-release-nmx8d" in namespace "job-3094" to be "released" Nov 16 10:17:43.423: INFO: Pod "adopt-release-nmx8d": Phase="Running", Reason="", readiness=true. Elapsed: 28.65459ms Nov 16 10:17:47.285: INFO: Pod "adopt-release-nmx8d": Phase="Running", Reason="", readiness=true. Elapsed: 3.890812201s Nov 16 10:17:47.285: INFO: Pod "adopt-release-nmx8d" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:17:47.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3094" for this suite. 
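The Job test above exercises controller adoption semantics: an orphaned pod whose labels match the Job's selector is re-adopted (given back a controller ownerReference), and an owned pod whose labels are removed is released. A minimal Python sketch of that decision rule follows; this is an illustrative model, not the actual controller code (the real logic lives in Kubernetes' ControllerRefManager and also checks deletion timestamps and ownership conflicts).

```python
def reconcile_ownership(selector: dict, pod_labels: dict, pod_owned: bool) -> str:
    """Model of the adopt/release rule the Job e2e test exercises.

    - An orphaned pod whose labels match the selector is adopted.
    - An owned pod whose labels no longer match is released.
    - Otherwise ownership is left unchanged.
    """
    matches = all(pod_labels.get(k) == v for k, v in selector.items())
    if matches and not pod_owned:
        return "adopt"
    if not matches and pod_owned:
        return "release"
    return "none"
```

This mirrors the two STEP phases in the log: "Checking that the Job readopts the Pod" (orphan with matching labels) and "Checking that the Job releases the Pod" (owned pod with labels stripped).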
• [SLOW TEST:11.790 seconds] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":226,"skipped":3895,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:17:47.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 16 10:17:49.089: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 16 10:17:51.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118669, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118669, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118669, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118669, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 16 10:17:54.153: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:17:54.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4635" for this suite. STEP: Destroying namespace "webhook-4635-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.911 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":227,"skipped":3901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:17:54.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating configMap with name configmap-test-volume-map-8e7649fe-05d3-4d69-9d6d-18c4372c4dd9 STEP: Creating a pod to test consume configMaps Nov 16 10:17:54.509: INFO: Waiting up to 5m0s for pod "pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1" in namespace "configmap-1848" to be "Succeeded or Failed" Nov 16 10:17:54.548: INFO: Pod "pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.028504ms Nov 16 10:17:56.553: INFO: Pod "pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043672764s Nov 16 10:17:58.558: INFO: Pod "pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048827811s STEP: Saw pod success Nov 16 10:17:58.558: INFO: Pod "pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1" satisfied condition "Succeeded or Failed" Nov 16 10:17:58.562: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1 container configmap-volume-test: STEP: delete the pod Nov 16 10:17:58.600: INFO: Waiting for pod pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1 to disappear Nov 16 10:17:58.625: INFO: Pod pod-configmaps-d82cf034-72bc-4efb-b609-12bbff7871a1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:17:58.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1848" for this suite. 
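The ConfigMap test mounts a volume with an explicit `items` list (mapping a key to a custom path) and a per-item file mode. A sketch of that projection, using hypothetical key/path/mode values for illustration; the real kubelet writes actual files via its atomic-writer, this just models the key-to-file mapping:

```python
def project_configmap(data: dict, items: list, default_mode: int = 0o644) -> dict:
    """Model of a ConfigMap volume projection with an `items` list.

    Each item is {"key": ..., "path": ..., "mode": optional int}.
    Returns {path: (content, mode)}; keys without an item entry are
    not projected at all when `items` is specified.
    """
    files = {}
    for item in items:
        files[item["path"]] = (data[item["key"]], item.get("mode", default_mode))
    return files
```

With an item mode set (e.g. 0o400), the projected file carries that mode instead of the volume's `defaultMode`, which is the behaviour the `[LinuxOnly]` test verifies from inside the pod.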
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":228,"skipped":3937,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:17:58.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5820 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 16 10:17:58.724: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 16 10:17:58.800: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:18:00.803: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:18:02.805: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:04.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:06.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:08.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 
10:18:10.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:12.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:14.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:16.804: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:18:18.804: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 16 10:18:18.810: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 16 10:18:22.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.216:8080/dial?request=hostname&protocol=udp&host=10.244.2.215&port=8081&tries=1'] Namespace:pod-network-test-5820 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:18:22.909: INFO: >>> kubeConfig: /root/.kube/config I1116 10:18:22.939545 7 log.go:181] (0xc0032486e0) (0xc002f663c0) Create stream I1116 10:18:22.939569 7 log.go:181] (0xc0032486e0) (0xc002f663c0) Stream added, broadcasting: 1 I1116 10:18:22.942050 7 log.go:181] (0xc0032486e0) Reply frame received for 1 I1116 10:18:22.942083 7 log.go:181] (0xc0032486e0) (0xc0034157c0) Create stream I1116 10:18:22.942094 7 log.go:181] (0xc0032486e0) (0xc0034157c0) Stream added, broadcasting: 3 I1116 10:18:22.943084 7 log.go:181] (0xc0032486e0) Reply frame received for 3 I1116 10:18:22.943129 7 log.go:181] (0xc0032486e0) (0xc0034f12c0) Create stream I1116 10:18:22.943153 7 log.go:181] (0xc0032486e0) (0xc0034f12c0) Stream added, broadcasting: 5 I1116 10:18:22.944010 7 log.go:181] (0xc0032486e0) Reply frame received for 5 I1116 10:18:23.142461 7 log.go:181] (0xc0032486e0) Data frame received for 3 I1116 10:18:23.142492 7 log.go:181] (0xc0034157c0) (3) Data frame handling I1116 10:18:23.142513 7 log.go:181] (0xc0034157c0) (3) Data frame sent I1116 10:18:23.142855 7 log.go:181] (0xc0032486e0) Data frame received for 5 I1116 10:18:23.142889 7 
log.go:181] (0xc0034f12c0) (5) Data frame handling I1116 10:18:23.142936 7 log.go:181] (0xc0032486e0) Data frame received for 3 I1116 10:18:23.142953 7 log.go:181] (0xc0034157c0) (3) Data frame handling I1116 10:18:23.145195 7 log.go:181] (0xc0032486e0) Data frame received for 1 I1116 10:18:23.145237 7 log.go:181] (0xc002f663c0) (1) Data frame handling I1116 10:18:23.145280 7 log.go:181] (0xc002f663c0) (1) Data frame sent I1116 10:18:23.145313 7 log.go:181] (0xc0032486e0) (0xc002f663c0) Stream removed, broadcasting: 1 I1116 10:18:23.145342 7 log.go:181] (0xc0032486e0) Go away received I1116 10:18:23.145430 7 log.go:181] (0xc0032486e0) (0xc002f663c0) Stream removed, broadcasting: 1 I1116 10:18:23.145500 7 log.go:181] (0xc0032486e0) (0xc0034157c0) Stream removed, broadcasting: 3 I1116 10:18:23.145565 7 log.go:181] (0xc0032486e0) (0xc0034f12c0) Stream removed, broadcasting: 5 Nov 16 10:18:23.145: INFO: Waiting for responses: map[] Nov 16 10:18:23.165: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.216:8080/dial?request=hostname&protocol=udp&host=10.244.1.25&port=8081&tries=1'] Namespace:pod-network-test-5820 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:18:23.165: INFO: >>> kubeConfig: /root/.kube/config I1116 10:18:23.200442 7 log.go:181] (0xc003248e70) (0xc002f66820) Create stream I1116 10:18:23.200470 7 log.go:181] (0xc003248e70) (0xc002f66820) Stream added, broadcasting: 1 I1116 10:18:23.202402 7 log.go:181] (0xc003248e70) Reply frame received for 1 I1116 10:18:23.202431 7 log.go:181] (0xc003248e70) (0xc003415860) Create stream I1116 10:18:23.202441 7 log.go:181] (0xc003248e70) (0xc003415860) Stream added, broadcasting: 3 I1116 10:18:23.203279 7 log.go:181] (0xc003248e70) Reply frame received for 3 I1116 10:18:23.203301 7 log.go:181] (0xc003248e70) (0xc003415900) Create stream I1116 10:18:23.203307 7 log.go:181] (0xc003248e70) (0xc003415900) Stream 
added, broadcasting: 5 I1116 10:18:23.204201 7 log.go:181] (0xc003248e70) Reply frame received for 5 I1116 10:18:23.256017 7 log.go:181] (0xc003248e70) Data frame received for 3 I1116 10:18:23.256060 7 log.go:181] (0xc003415860) (3) Data frame handling I1116 10:18:23.256073 7 log.go:181] (0xc003415860) (3) Data frame sent I1116 10:18:23.256080 7 log.go:181] (0xc003248e70) Data frame received for 3 I1116 10:18:23.256092 7 log.go:181] (0xc003415860) (3) Data frame handling I1116 10:18:23.256109 7 log.go:181] (0xc003248e70) Data frame received for 5 I1116 10:18:23.256119 7 log.go:181] (0xc003415900) (5) Data frame handling I1116 10:18:23.257954 7 log.go:181] (0xc003248e70) Data frame received for 1 I1116 10:18:23.258005 7 log.go:181] (0xc002f66820) (1) Data frame handling I1116 10:18:23.258033 7 log.go:181] (0xc002f66820) (1) Data frame sent I1116 10:18:23.258058 7 log.go:181] (0xc003248e70) (0xc002f66820) Stream removed, broadcasting: 1 I1116 10:18:23.258074 7 log.go:181] (0xc003248e70) Go away received I1116 10:18:23.258263 7 log.go:181] (0xc003248e70) (0xc002f66820) Stream removed, broadcasting: 1 I1116 10:18:23.258303 7 log.go:181] (0xc003248e70) (0xc003415860) Stream removed, broadcasting: 3 I1116 10:18:23.258322 7 log.go:181] (0xc003248e70) (0xc003415900) Stream removed, broadcasting: 5 Nov 16 10:18:23.258: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:18:23.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5820" for this suite. 
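The networking test above has a test pod `curl` the netserver's `/dial` endpoint, which in turn sends a UDP "hostname" request to a peer pod and expects the peer's hostname back. A minimal local sketch of that UDP request/reply probe (loopback sockets standing in for the pod network; the real suite uses the agnhost image, not this code):

```python
import socket
import threading

def udp_hostname_server(sock):
    """Answer one 'hostname' datagram with this host's name, loosely
    mirroring what the agnhost netserver pods do for the dial probe."""
    data, addr = sock.recvfrom(1024)
    if data == b"hostname":
        sock.sendto(socket.gethostname().encode(), addr)

# Server side: bind an ephemeral UDP port and serve one request.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
t = threading.Thread(target=udp_hostname_server, args=(server,))
t.start()

# Client side: send the probe and wait for the hostname reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"hostname", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
t.join()
```

The e2e test passes when every peer answers with its own hostname and the `Waiting for responses: map[]` line shows no outstanding replies.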
• [SLOW TEST:24.635 seconds] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3937,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:18:23.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating a pod to test downward API volume plugin Nov 16 10:18:23.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8" in namespace "projected-1005" to be "Succeeded or Failed" Nov 16 10:18:23.470: INFO: Pod "downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 44.121335ms Nov 16 10:18:25.473: INFO: Pod "downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046853646s Nov 16 10:18:27.478: INFO: Pod "downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8": Phase="Running", Reason="", readiness=true. Elapsed: 4.05134143s Nov 16 10:18:29.525: INFO: Pod "downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098745895s STEP: Saw pod success Nov 16 10:18:29.525: INFO: Pod "downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8" satisfied condition "Succeeded or Failed" Nov 16 10:18:29.527: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8 container client-container: STEP: delete the pod Nov 16 10:18:29.664: INFO: Waiting for pod downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8 to disappear Nov 16 10:18:29.676: INFO: Pod downwardapi-volume-b663d6d8-c4c5-433f-800a-31b892fdbef8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:18:29.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1005" for this suite. 
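The downward API test projects the container's CPU request into a volume file via a `resourceFieldRef`. The exposed value is the resource quantity divided by the field's `divisor`, rounded up to a whole unit; with the default divisor of "1" core, a fractional request like 250m is reported as 1. A sketch of that conversion, assuming only plain-core and milli-core quantity strings:

```python
from fractions import Fraction
from math import ceil

def parse_cpu(q: str) -> Fraction:
    """Parse a CPU quantity such as '250m' or '2' into cores."""
    if q.endswith("m"):
        return Fraction(int(q[:-1]), 1000)
    return Fraction(q)

def downward_api_value(request: str, divisor: str = "1") -> int:
    """Value a downward API resourceFieldRef exposes: the quantity
    divided by the divisor, with partial units rounded up."""
    return ceil(parse_cpu(request) / parse_cpu(divisor))
```

Choosing a divisor of "1m" lets a pod read its request in millicores exactly, which is the usual way to avoid the round-up.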
• [SLOW TEST:6.415 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3948,"failed":0} [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:18:29.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:18:30.352: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Nov 16 10:18:30.401: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 16 10:18:35.434: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each 
pod is running Nov 16 10:18:35.435: INFO: Creating deployment "test-rolling-update-deployment" Nov 16 10:18:35.441: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Nov 16 10:18:35.460: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Nov 16 10:18:37.469: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Nov 16 10:18:37.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118715, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118715, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118715, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118715, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 16 10:18:39.494: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 16 10:18:39.501: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1379 /apis/apps/v1/namespaces/deployment-1379/deployments/test-rolling-update-deployment 
57c86613-dbdd-4472-964b-47d98c1f041f 9793783 1 2020-11-16 10:18:35 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-11-16 10:18:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 10:18:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003635f78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-16 10:18:35 +0000 UTC,LastTransitionTime:2020-11-16 10:18:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-11-16 10:18:38 +0000 UTC,LastTransitionTime:2020-11-16 10:18:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 16 10:18:39.503: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-1379 /apis/apps/v1/namespaces/deployment-1379/replicasets/test-rolling-update-deployment-c4cb8d6d9 b4af30a7-2390-4b43-81ca-6d45f44163f5 9793772 1 2020-11-16 10:18:35 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 57c86613-dbdd-4472-964b-47d98c1f041f 0xc00666a6f0 0xc00666a6f1}] [] [{kube-controller-manager Update apps/v1 2020-11-16 10:18:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"57c86613-dbdd-4472-964b-47d98c1f041f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} 
false false false}] [] Always 0xc00666a768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:18:39.503: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Nov 16 10:18:39.503: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1379 /apis/apps/v1/namespaces/deployment-1379/replicasets/test-rolling-update-controller 2febc72d-304e-4b4b-bb94-3777fb029812 9793782 2 2020-11-16 10:18:30 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 57c86613-dbdd-4472-964b-47d98c1f041f 0xc00666a5e7 0xc00666a5e8}] [] [{e2e.test Update apps/v1 2020-11-16 10:18:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-16 10:18:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"57c86613-dbdd-4472-964b-47d98c1f041f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00666a688 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 16 10:18:39.506: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-mr7rr" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-mr7rr test-rolling-update-deployment-c4cb8d6d9- deployment-1379 /api/v1/namespaces/deployment-1379/pods/test-rolling-update-deployment-c4cb8d6d9-mr7rr 974bc976-13ac-4121-93f9-c0c5ee10e1ce 9793771 0 2020-11-16 10:18:35 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 b4af30a7-2390-4b43-81ca-6d45f44163f5 0xc00666ac00 0xc00666ac01}] [] [{kube-controller-manager Update v1 2020-11-16 10:18:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b4af30a7-2390-4b43-81ca-6d45f44163f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-16 10:18:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.218\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gn8lt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gn8lt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gn8lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{
},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:18:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-16 10:18:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.218,StartTime:2020-11-16 10:18:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-16 10:18:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://774b64a5374bc4305751718562566d9f21474ebbcbe039cfa5c1e9090fd17eac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:18:39.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1379" for this suite. 
• [SLOW TEST:9.830 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":231,"skipped":3948,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:18:39.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:18:39.703: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Nov 16 10:18:39.729: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:39.739: INFO: Number of nodes with available pods: 0 Nov 16 10:18:39.739: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:40.743: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:40.746: INFO: Number of nodes with available pods: 0 Nov 16 10:18:40.746: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:41.870: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:41.883: INFO: Number of nodes with available pods: 0 Nov 16 10:18:41.883: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:42.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:42.800: INFO: Number of nodes with available pods: 0 Nov 16 10:18:42.800: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:43.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:43.750: INFO: Number of nodes with available pods: 1 Nov 16 10:18:43.750: INFO: Node latest-worker2 is running more than one daemon pod Nov 16 10:18:44.777: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:44.785: INFO: Number of nodes with available pods: 2 Nov 16 10:18:44.785: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Nov 16 10:18:45.016: INFO: Wrong image for pod: daemon-set-4xr66. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:45.016: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:45.020: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:46.070: INFO: Wrong image for pod: daemon-set-4xr66. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:46.070: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:46.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:47.025: INFO: Wrong image for pod: daemon-set-4xr66. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:47.025: INFO: Pod daemon-set-4xr66 is not available Nov 16 10:18:47.025: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:47.029: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:48.025: INFO: Pod daemon-set-6lrrp is not available Nov 16 10:18:48.025: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Nov 16 10:18:48.030: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:49.046: INFO: Pod daemon-set-6lrrp is not available Nov 16 10:18:49.046: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:49.050: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:50.024: INFO: Pod daemon-set-6lrrp is not available Nov 16 10:18:50.024: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:50.028: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:51.026: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:51.232: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:52.026: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 16 10:18:52.030: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:53.025: INFO: Wrong image for pod: daemon-set-xrbw6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Nov 16 10:18:53.025: INFO: Pod daemon-set-xrbw6 is not available Nov 16 10:18:53.029: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:54.025: INFO: Pod daemon-set-jtkjt is not available Nov 16 10:18:54.030: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Nov 16 10:18:54.034: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:54.037: INFO: Number of nodes with available pods: 1 Nov 16 10:18:54.037: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:55.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:55.047: INFO: Number of nodes with available pods: 1 Nov 16 10:18:55.047: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:56.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:56.047: INFO: Number of nodes with available pods: 1 Nov 16 10:18:56.047: INFO: Node latest-worker is running more than one daemon pod Nov 16 10:18:57.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 16 10:18:57.046: INFO: Number of nodes with available pods: 2 Nov 16 10:18:57.047: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon 
set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7316, will wait for the garbage collector to delete the pods Nov 16 10:18:57.121: INFO: Deleting DaemonSet.extensions daemon-set took: 6.43685ms Nov 16 10:18:57.521: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.272706ms Nov 16 10:19:05.727: INFO: Number of nodes with available pods: 0 Nov 16 10:19:05.727: INFO: Number of running nodes: 0, number of available pods: 0 Nov 16 10:19:05.730: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7316/daemonsets","resourceVersion":"9793964"},"items":null} Nov 16 10:19:05.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7316/pods","resourceVersion":"9793964"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:19:05.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7316" for this suite. 
• [SLOW TEST:26.270 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":232,"skipped":3951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:19:05.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Nov 16 10:19:10.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8915 pod-service-account-215f4a56-b4e9-4d02-b19c-cc59757e7b13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Nov 16 10:19:13.964: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-8915 pod-service-account-215f4a56-b4e9-4d02-b19c-cc59757e7b13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Nov 16 10:19:14.180: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8915 pod-service-account-215f4a56-b4e9-4d02-b19c-cc59757e7b13 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:19:14.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8915" for this suite. • [SLOW TEST:8.624 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":233,"skipped":3997,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:19:14.408: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:19:14.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Nov 16 10:19:15.073: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-16T10:19:15Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-16T10:19:15Z]] name:name1 resourceVersion:9794050 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59027023-9ec2-40e6-a5f2-ff65d5e32241] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Nov 16 10:19:25.078: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-16T10:19:25Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-16T10:19:25Z]] name:name2 resourceVersion:9794095 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5a27e77a-d52e-4f1e-9eb4-92be4e338302] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Nov 16 10:19:35.092: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-16T10:19:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 
fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-16T10:19:35Z]] name:name1 resourceVersion:9794123 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59027023-9ec2-40e6-a5f2-ff65d5e32241] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Nov 16 10:19:45.100: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-16T10:19:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-16T10:19:45Z]] name:name2 resourceVersion:9794152 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5a27e77a-d52e-4f1e-9eb4-92be4e338302] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Nov 16 10:19:55.110: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-16T10:19:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-16T10:19:35Z]] name:name1 resourceVersion:9794182 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:59027023-9ec2-40e6-a5f2-ff65d5e32241] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Nov 16 10:20:05.120: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-16T10:19:25Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 
fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-16T10:19:45Z]] name:name2 resourceVersion:9794212 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5a27e77a-d52e-4f1e-9eb4-92be4e338302] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:20:15.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4487" for this suite. • [SLOW TEST:61.244 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":234,"skipped":4005,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:20:15.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-1351 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1351 STEP: Deleting pre-stop pod Nov 16 10:20:28.800: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:20:28.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1351" for this suite. 
• [SLOW TEST:13.240 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":235,"skipped":4013,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:20:28.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:20:46.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1358" for this suite.
• [SLOW TEST:17.336 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":236,"skipped":4022,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates
  should run the lifecycle of PodTemplates [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:20:46.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:20:46.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5730" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":237,"skipped":4041,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:20:46.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:02.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2296" for this suite.
• [SLOW TEST:16.351 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":238,"skipped":4058,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] server version
  should find the server version [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] server version
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:02.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Request ServerVersion
STEP: Confirm major version
Nov 16 10:21:02.772: INFO: Major version: 1
STEP: Confirm minor version
Nov 16 10:21:02.772: INFO: cleanMinorVersion: 19
Nov 16 10:21:02.772: INFO: Minor version: 19
[AfterEach] [sig-api-machinery] server version
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:02.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-3832" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":239,"skipped":4090,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:02.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Nov 16 10:21:02.861: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:08.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9521" for this suite.
• [SLOW TEST:6.219 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":240,"skipped":4106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:09.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Nov 16 10:21:09.082: INFO: Pod name pod-release: Found 0 pods out of 1
Nov 16 10:21:14.090: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:14.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-72" for this suite.
• [SLOW TEST:5.216 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":241,"skipped":4139,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:14.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4664 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4664;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4664 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4664;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4664.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4664.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4664.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4664.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4664.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4664.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4664.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.77.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.77.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.77.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.77.165_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4664 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4664;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4664 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4664;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4664.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4664.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4664.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4664.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4664.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4664.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4664.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4664.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4664.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.77.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.77.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.77.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.77.165_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 16 10:21:20.545: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.550: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.554: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.557: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.565: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.567: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.586: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.589: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.592: INFO: Unable to read jessie_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.595: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.598: INFO: Unable to read jessie_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.605: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:20.846: INFO: Lookups using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4664 wheezy_tcp@dns-test-service.dns-4664 wheezy_udp@dns-test-service.dns-4664.svc wheezy_tcp@dns-test-service.dns-4664.svc wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4664 jessie_tcp@dns-test-service.dns-4664 jessie_udp@dns-test-service.dns-4664.svc jessie_tcp@dns-test-service.dns-4664.svc jessie_udp@_http._tcp.dns-test-service.dns-4664.svc jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc]
Nov 16 10:21:25.851: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.854: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.857: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.860: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.863: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.869: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.871: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.893: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.896: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.898: INFO: Unable to read jessie_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.901: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.903: INFO: Unable to read jessie_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.906: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.909: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.911: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:25.930: INFO: Lookups using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4664 wheezy_tcp@dns-test-service.dns-4664 wheezy_udp@dns-test-service.dns-4664.svc wheezy_tcp@dns-test-service.dns-4664.svc wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4664 jessie_tcp@dns-test-service.dns-4664 jessie_udp@dns-test-service.dns-4664.svc jessie_tcp@dns-test-service.dns-4664.svc jessie_udp@_http._tcp.dns-test-service.dns-4664.svc jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc]
Nov 16 10:21:30.851: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.854: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.857: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.860: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.865: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.869: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.872: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.875: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.894: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.897: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.901: INFO: Unable to read jessie_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.904: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.908: INFO: Unable to read jessie_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.911: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:30.935: INFO: Lookups using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4664 wheezy_tcp@dns-test-service.dns-4664 wheezy_udp@dns-test-service.dns-4664.svc wheezy_tcp@dns-test-service.dns-4664.svc wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4664 jessie_tcp@dns-test-service.dns-4664 jessie_udp@dns-test-service.dns-4664.svc jessie_tcp@dns-test-service.dns-4664.svc jessie_udp@_http._tcp.dns-test-service.dns-4664.svc jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc]
Nov 16 10:21:35.852: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.857: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.861: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.864: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.871: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.875: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.878: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.902: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.905: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.909: INFO: Unable to read jessie_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.912: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.915: INFO: Unable to read jessie_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.921: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.924: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3)
Nov 16 10:21:35.941: INFO: Lookups using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4664 wheezy_tcp@dns-test-service.dns-4664 wheezy_udp@dns-test-service.dns-4664.svc wheezy_tcp@dns-test-service.dns-4664.svc wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc
wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4664 jessie_tcp@dns-test-service.dns-4664 jessie_udp@dns-test-service.dns-4664.svc jessie_tcp@dns-test-service.dns-4664.svc jessie_udp@_http._tcp.dns-test-service.dns-4664.svc jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc] Nov 16 10:21:40.871: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.874: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.877: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.881: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.884: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.887: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.890: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc from pod 
dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.892: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.911: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.914: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.917: INFO: Unable to read jessie_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.919: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.922: INFO: Unable to read jessie_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.925: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.927: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.930: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:40.950: INFO: Lookups using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4664 wheezy_tcp@dns-test-service.dns-4664 wheezy_udp@dns-test-service.dns-4664.svc wheezy_tcp@dns-test-service.dns-4664.svc wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4664 jessie_tcp@dns-test-service.dns-4664 jessie_udp@dns-test-service.dns-4664.svc jessie_tcp@dns-test-service.dns-4664.svc jessie_udp@_http._tcp.dns-test-service.dns-4664.svc jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc] Nov 16 10:21:46.759: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.829: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.833: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.950: INFO: Unable to 
read wheezy_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.985: INFO: Unable to read wheezy_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.989: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.993: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:46.997: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.020: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.023: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.026: INFO: Unable to read jessie_udp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 
10:21:47.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664 from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.032: INFO: Unable to read jessie_udp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.035: INFO: Unable to read jessie_tcp@dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.039: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc from pod dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3: the server could not find the requested resource (get pods dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3) Nov 16 10:21:47.071: INFO: Lookups using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4664 wheezy_tcp@dns-test-service.dns-4664 wheezy_udp@dns-test-service.dns-4664.svc wheezy_tcp@dns-test-service.dns-4664.svc wheezy_udp@_http._tcp.dns-test-service.dns-4664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4664 jessie_tcp@dns-test-service.dns-4664 jessie_udp@dns-test-service.dns-4664.svc jessie_tcp@dns-test-service.dns-4664.svc jessie_udp@_http._tcp.dns-test-service.dns-4664.svc 
jessie_tcp@_http._tcp.dns-test-service.dns-4664.svc]
Nov 16 10:21:50.983: INFO: DNS probes using dns-4664/dns-test-db14f090-e425-4ef8-8992-6e4a0764eef3 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:51.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4664" for this suite.
• [SLOW TEST:37.596 seconds]
[sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":242,"skipped":4158,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:51.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 10:21:51.892: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:53.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8581" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":243,"skipped":4172,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:53.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 16 10:21:53.342: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-db2a16ac-69cd-4b27-9dfe-8a637f0019b4" in namespace "security-context-test-9514" to be "Succeeded or Failed"
Nov 16 10:21:53.345: INFO: Pod "busybox-privileged-false-db2a16ac-69cd-4b27-9dfe-8a637f0019b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.338573ms
Nov 16 10:21:55.351: INFO: Pod "busybox-privileged-false-db2a16ac-69cd-4b27-9dfe-8a637f0019b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008432489s
Nov 16 10:21:57.355: INFO: Pod "busybox-privileged-false-db2a16ac-69cd-4b27-9dfe-8a637f0019b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012912608s
Nov 16 10:21:57.355: INFO: Pod "busybox-privileged-false-db2a16ac-69cd-4b27-9dfe-8a637f0019b4" satisfied condition "Succeeded or Failed"
Nov 16 10:21:57.509: INFO: Got logs for pod "busybox-privileged-false-db2a16ac-69cd-4b27-9dfe-8a637f0019b4": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:21:57.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9514" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":4190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:21:57.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-4383b074-a03b-4ef3-aad9-f0828280ebdd
STEP: Creating a pod to test consume secrets
Nov 16 10:21:57.759: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86" in namespace "projected-8626" to be "Succeeded or Failed"
Nov 16 10:21:57.762: INFO: Pod "pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86": Phase="Pending", Reason="", readiness=false. Elapsed: 3.143362ms
Nov 16 10:21:59.766: INFO: Pod "pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006857995s
Nov 16 10:22:01.771: INFO: Pod "pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011505984s
STEP: Saw pod success
Nov 16 10:22:01.771: INFO: Pod "pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86" satisfied condition "Succeeded or Failed"
Nov 16 10:22:01.781: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86 container projected-secret-volume-test: 
STEP: delete the pod
Nov 16 10:22:01.812: INFO: Waiting for pod pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86 to disappear
Nov 16 10:22:01.823: INFO: Pod pod-projected-secrets-bafa6e07-88b2-439d-b445-445975733d86 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:22:01.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8626" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":4216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:22:01.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-a29ece89-f011-42d0-aa85-bd3b4edcb504
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-a29ece89-f011-42d0-aa85-bd3b4edcb504
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:22:08.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1360" for this suite.
• [SLOW TEST:6.184 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":4243,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:22:08.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 16 10:22:08.573: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 16 10:22:10.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118928, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118928, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118928, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741118928, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 16 10:22:13.614: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:22:13.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5084" for this suite.
STEP: Destroying namespace "webhook-5084-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.690 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":247,"skipped":4269,"failed":0}
SSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:22:13.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:22:13.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7246" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":248,"skipped":4273,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:22:13.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Nov 16 10:22:14.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf" in namespace "projected-3707" to be "Succeeded or Failed"
Nov 16 10:22:14.257: INFO: Pod "downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.321967ms
Nov 16 10:22:16.262: INFO: Pod "downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043086788s
Nov 16 10:22:18.267: INFO: Pod "downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048416686s
STEP: Saw pod success
Nov 16 10:22:18.267: INFO: Pod "downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf" satisfied condition "Succeeded or Failed"
Nov 16 10:22:18.271: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf container client-container: 
STEP: delete the pod
Nov 16 10:22:18.347: INFO: Waiting for pod downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf to disappear
Nov 16 10:22:18.357: INFO: Pod downwardapi-volume-0a940ad0-6466-4e7b-a33e-ec5490fc3fbf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:22:18.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3707" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4280,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:22:18.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185 Nov 16 10:22:18.431: INFO: Pod name my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185: Found 0 pods out of 1 Nov 16 10:22:23.448: INFO: Pod name my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185: Found 1 pods out of 1 Nov 16 10:22:23.448: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185" are running Nov 16 10:22:23.451: INFO: Pod "my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185-n2g9z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 10:22:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 10:22:21 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 10:22:21 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-16 10:22:18 +0000 UTC Reason: Message:}]) Nov 16 10:22:23.452: INFO: Trying to dial the pod Nov 16 10:22:28.460: INFO: Controller my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185: Got expected result from replica 1 [my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185-n2g9z]: "my-hostname-basic-7f8ca33f-7ab6-4b14-9588-5b265c734185-n2g9z", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:22:28.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6068" for this suite. • [SLOW TEST:10.102 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":250,"skipped":4284,"failed":0} SSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:22:28.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Nov 
16 10:22:28.582: INFO: created test-pod-1 Nov 16 10:22:28.587: INFO: created test-pod-2 Nov 16 10:22:28.613: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:22:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1290" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":251,"skipped":4287,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:22:28.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Nov 16 10:22:37.582: INFO: Successfully updated pod "labelsupdate2d140706-be62-45b0-ae3c-c9575d73e6d0" [AfterEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:22:39.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9051" for this suite. • [SLOW TEST:10.776 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":252,"skipped":4297,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:22:39.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in 
namespace pod-network-test-3906 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 16 10:22:39.770: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 16 10:22:39.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:22:41.872: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:22:43.871: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:45.871: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:47.874: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:49.872: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:51.872: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:53.872: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:55.871: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:22:57.871: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 16 10:22:57.875: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 16 10:23:03.999: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.237 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3906 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:23:03.999: INFO: >>> kubeConfig: /root/.kube/config I1116 10:23:04.035103 7 log.go:181] (0xc00003b970) (0xc0036fe460) Create stream I1116 10:23:04.035135 7 log.go:181] (0xc00003b970) (0xc0036fe460) Stream added, broadcasting: 1 I1116 10:23:04.037076 7 log.go:181] (0xc00003b970) Reply frame received for 1 I1116 10:23:04.037104 7 log.go:181] (0xc00003b970) (0xc003432fa0) Create stream I1116 10:23:04.037113 7 log.go:181] (0xc00003b970) (0xc003432fa0) Stream added, broadcasting: 3 I1116 
10:23:04.038084 7 log.go:181] (0xc00003b970) Reply frame received for 3 I1116 10:23:04.038123 7 log.go:181] (0xc00003b970) (0xc003433040) Create stream I1116 10:23:04.038141 7 log.go:181] (0xc00003b970) (0xc003433040) Stream added, broadcasting: 5 I1116 10:23:04.039176 7 log.go:181] (0xc00003b970) Reply frame received for 5 I1116 10:23:05.132240 7 log.go:181] (0xc00003b970) Data frame received for 5 I1116 10:23:05.132274 7 log.go:181] (0xc003433040) (5) Data frame handling I1116 10:23:05.132294 7 log.go:181] (0xc00003b970) Data frame received for 3 I1116 10:23:05.132307 7 log.go:181] (0xc003432fa0) (3) Data frame handling I1116 10:23:05.132315 7 log.go:181] (0xc003432fa0) (3) Data frame sent I1116 10:23:05.132323 7 log.go:181] (0xc00003b970) Data frame received for 3 I1116 10:23:05.132329 7 log.go:181] (0xc003432fa0) (3) Data frame handling I1116 10:23:05.134191 7 log.go:181] (0xc00003b970) Data frame received for 1 I1116 10:23:05.134226 7 log.go:181] (0xc0036fe460) (1) Data frame handling I1116 10:23:05.134257 7 log.go:181] (0xc0036fe460) (1) Data frame sent I1116 10:23:05.134366 7 log.go:181] (0xc00003b970) (0xc0036fe460) Stream removed, broadcasting: 1 I1116 10:23:05.134424 7 log.go:181] (0xc00003b970) Go away received I1116 10:23:05.134488 7 log.go:181] (0xc00003b970) (0xc0036fe460) Stream removed, broadcasting: 1 I1116 10:23:05.134520 7 log.go:181] (0xc00003b970) (0xc003432fa0) Stream removed, broadcasting: 3 I1116 10:23:05.134537 7 log.go:181] (0xc00003b970) (0xc003433040) Stream removed, broadcasting: 5 Nov 16 10:23:05.134: INFO: Found all expected endpoints: [netserver-0] Nov 16 10:23:05.138: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.29 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3906 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:23:05.138: INFO: >>> kubeConfig: /root/.kube/config I1116 10:23:05.282962 7 log.go:181] 
(0xc003249810) (0xc0035852c0) Create stream I1116 10:23:05.283051 7 log.go:181] (0xc003249810) (0xc0035852c0) Stream added, broadcasting: 1 I1116 10:23:05.284806 7 log.go:181] (0xc003249810) Reply frame received for 1 I1116 10:23:05.284935 7 log.go:181] (0xc003249810) (0xc0034330e0) Create stream I1116 10:23:05.284952 7 log.go:181] (0xc003249810) (0xc0034330e0) Stream added, broadcasting: 3 I1116 10:23:05.285830 7 log.go:181] (0xc003249810) Reply frame received for 3 I1116 10:23:05.285876 7 log.go:181] (0xc003249810) (0xc003433180) Create stream I1116 10:23:05.285886 7 log.go:181] (0xc003249810) (0xc003433180) Stream added, broadcasting: 5 I1116 10:23:05.286479 7 log.go:181] (0xc003249810) Reply frame received for 5 I1116 10:23:06.387950 7 log.go:181] (0xc003249810) Data frame received for 5 I1116 10:23:06.387989 7 log.go:181] (0xc003433180) (5) Data frame handling I1116 10:23:06.388038 7 log.go:181] (0xc003249810) Data frame received for 3 I1116 10:23:06.388063 7 log.go:181] (0xc0034330e0) (3) Data frame handling I1116 10:23:06.388109 7 log.go:181] (0xc0034330e0) (3) Data frame sent I1116 10:23:06.388133 7 log.go:181] (0xc003249810) Data frame received for 3 I1116 10:23:06.388143 7 log.go:181] (0xc0034330e0) (3) Data frame handling I1116 10:23:06.390469 7 log.go:181] (0xc003249810) Data frame received for 1 I1116 10:23:06.390552 7 log.go:181] (0xc0035852c0) (1) Data frame handling I1116 10:23:06.390582 7 log.go:181] (0xc0035852c0) (1) Data frame sent I1116 10:23:06.390604 7 log.go:181] (0xc003249810) (0xc0035852c0) Stream removed, broadcasting: 1 I1116 10:23:06.390630 7 log.go:181] (0xc003249810) Go away received I1116 10:23:06.390740 7 log.go:181] (0xc003249810) (0xc0035852c0) Stream removed, broadcasting: 1 I1116 10:23:06.390765 7 log.go:181] (0xc003249810) (0xc0034330e0) Stream removed, broadcasting: 3 I1116 10:23:06.390775 7 log.go:181] (0xc003249810) (0xc003433180) Stream removed, broadcasting: 5 Nov 16 10:23:06.390: INFO: Found all expected endpoints: 
[netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:23:06.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3906" for this suite. • [SLOW TEST:26.775 seconds] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":253,"skipped":4317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:23:06.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 16 10:23:11.103: INFO: Successfully updated pod "pod-update-4b1a92e0-915b-4d94-9e7d-440d5d36af1d" STEP: verifying the updated pod is in kubernetes Nov 16 10:23:11.125: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:23:11.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6357" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:23:11.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1116 10:23:51.497070 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 16 10:24:53.518: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 16 10:24:53.518: INFO: Deleting pod "simpletest.rc-6kkgr" in namespace "gc-6291" Nov 16 10:24:53.558: INFO: Deleting pod "simpletest.rc-9fnv4" in namespace "gc-6291" Nov 16 10:24:53.638: INFO: Deleting pod "simpletest.rc-cswgs" in namespace "gc-6291" Nov 16 10:24:53.682: INFO: Deleting pod "simpletest.rc-dr8k8" in namespace "gc-6291" Nov 16 10:24:54.243: INFO: Deleting pod "simpletest.rc-lkjfg" in namespace "gc-6291" Nov 16 10:24:54.456: INFO: Deleting pod "simpletest.rc-lvbjj" in namespace "gc-6291" Nov 16 10:24:54.886: INFO: Deleting pod "simpletest.rc-mtkmv" in namespace "gc-6291" Nov 16 10:24:54.944: INFO: Deleting pod "simpletest.rc-vn2n7" in namespace "gc-6291" Nov 16 10:24:55.366: INFO: Deleting pod "simpletest.rc-vttwp" in namespace "gc-6291" Nov 16 10:24:55.533: INFO: Deleting pod "simpletest.rc-zpcnj" in namespace "gc-6291" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:24:55.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6291" for this suite. 
• [SLOW TEST:104.801 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":255,"skipped":4377,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:24:55.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-50888f30-b0f4-485d-b301-ded5d65c1120 in namespace container-probe-6592 Nov 16 10:25:00.419: INFO: Started pod 
liveness-50888f30-b0f4-485d-b301-ded5d65c1120 in namespace container-probe-6592 STEP: checking the pod's current state and verifying that restartCount is present Nov 16 10:25:00.421: INFO: Initial restart count of pod liveness-50888f30-b0f4-485d-b301-ded5d65c1120 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:29:01.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6592" for this suite. • [SLOW TEST:245.147 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4381,"failed":0} [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:29:01.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] 
IngressClass API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 16 10:29:01.594: INFO: starting watch STEP: patching STEP: updating Nov 16 10:29:01.651: INFO: waiting for watch events with expected annotations Nov 16 10:29:01.651: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:29:01.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5407" for this suite. 
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":257,"skipped":4381,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:29:01.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-0ea2e069-68d6-4e41-aa99-e82821dd3039 in namespace container-probe-9028 Nov 16 10:29:06.033: INFO: Started pod busybox-0ea2e069-68d6-4e41-aa99-e82821dd3039 in namespace container-probe-9028 STEP: checking the pod's current state and verifying that restartCount is present Nov 16 10:29:06.037: INFO: Initial restart count of pod busybox-0ea2e069-68d6-4e41-aa99-e82821dd3039 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 
10:33:07.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9028" for this suite. • [SLOW TEST:245.765 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4400,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:07.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Nov 16 10:33:07.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3717' Nov 16 10:33:11.378: INFO: stderr: "" Nov 16 10:33:11.378: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Nov 16 10:33:16.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3717 -o json' Nov 16 10:33:16.534: INFO: stderr: "" Nov 16 10:33:16.534: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-11-16T10:33:11Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-16T10:33:11Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n 
\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.247\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-16T10:33:14Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3717\",\n \"resourceVersion\": \"9797416\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3717/pods/e2e-test-httpd-pod\",\n \"uid\": \"e37ed5c9-c04e-4f2b-a1b7-51082801b8fc\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-j8kl7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-j8kl7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-j8kl7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:33:11Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:33:14Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:33:14Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-16T10:33:11Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://7ac7e8f80375a987f09c0dcd58ee11742ff7ad04b043c3cbc532b68db7c00142\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-11-16T10:33:13Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.247\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.247\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-11-16T10:33:11Z\"\n }\n}\n" STEP: replace the image in the pod Nov 16 10:33:16.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3717' Nov 16 10:33:16.939: INFO: stderr: "" Nov 16 
10:33:16.939: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Nov 16 10:33:16.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3717' Nov 16 10:33:20.082: INFO: stderr: "" Nov 16 10:33:20.082: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:33:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3717" for this suite. • [SLOW TEST:12.631 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":259,"skipped":4405,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:20.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Nov 16 10:33:20.233: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-954 /api/v1/namespaces/watch-954/configmaps/e2e-watch-test-label-changed 8aa4ce00-e888-4c15-a4c6-d63534e31fc7 9797450 0 2020-11-16 10:33:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-16 10:33:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 10:33:20.233: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-954 /api/v1/namespaces/watch-954/configmaps/e2e-watch-test-label-changed 8aa4ce00-e888-4c15-a4c6-d63534e31fc7 9797451 0 2020-11-16 10:33:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-16 10:33:20 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 10:33:20.233: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-954 /api/v1/namespaces/watch-954/configmaps/e2e-watch-test-label-changed 8aa4ce00-e888-4c15-a4c6-d63534e31fc7 9797452 0 2020-11-16 10:33:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-16 10:33:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Nov 16 10:33:30.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-954 /api/v1/namespaces/watch-954/configmaps/e2e-watch-test-label-changed 8aa4ce00-e888-4c15-a4c6-d63534e31fc7 9797493 0 2020-11-16 10:33:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-16 10:33:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 10:33:30.292: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-954 /api/v1/namespaces/watch-954/configmaps/e2e-watch-test-label-changed 8aa4ce00-e888-4c15-a4c6-d63534e31fc7 9797494 0 2020-11-16 10:33:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] 
[] [{e2e.test Update v1 2020-11-16 10:33:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 16 10:33:30.292: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-954 /api/v1/namespaces/watch-954/configmaps/e2e-watch-test-label-changed 8aa4ce00-e888-4c15-a4c6-d63534e31fc7 9797495 0 2020-11-16 10:33:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-16 10:33:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:33:30.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-954" for this suite. 
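The event sequence logged above (ADDED, MODIFIED, DELETED, then ADDED, MODIFIED, DELETED) follows from how a label-selector watch treats label changes: an object edited off the selector surfaces as a DELETED event, and restoring the label surfaces as a fresh ADDED. A minimal local simulation of that dispatch rule (illustrative only, not the e2e framework's or apiserver's code; `dispatch` and the event tuples are invented for the sketch):

```python
def dispatch(events, selector):
    """Translate raw object events into what a watch filtered by
    `selector` (a dict of required labels) would deliver.

    Each raw event is (verb, labels) with verb ADDED/MODIFIED/DELETED.
    An object leaving the selector appears as DELETED; one re-entering
    it appears as ADDED. Changes while off-selector are invisible.
    """
    delivered = []
    in_view = False
    for verb, labels in events:
        matches = (verb != "DELETED" and
                   all(labels.get(k) == v for k, v in selector.items()))
        if matches and not in_view:
            delivered.append("ADDED")
        elif matches and in_view:
            delivered.append("MODIFIED")
        elif in_view and not matches:
            delivered.append("DELETED")
        in_view = matches
    return delivered

# Mirrors the test: modify, change the label away, modify again
# (unseen), restore the label, modify, then delete the configmap.
label = "watch-this-configmap"
seq = [
    ("ADDED",    {label: "label-changed-and-restored"}),
    ("MODIFIED", {label: "label-changed-and-restored"}),
    ("MODIFIED", {label: "wrong-value"}),                  # leaves selector
    ("MODIFIED", {label: "wrong-value"}),                  # invisible
    ("MODIFIED", {label: "label-changed-and-restored"}),   # restored
    ("MODIFIED", {label: "label-changed-and-restored"}),
    ("DELETED",  {label: "label-changed-and-restored"}),
]
view = dispatch(seq, {label: "label-changed-and-restored"})
# view reproduces the six events in the log above.
```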
• [SLOW TEST:10.232 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":260,"skipped":4410,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:30.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication 
controller Nov 16 10:33:30.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3152' Nov 16 10:33:30.675: INFO: stderr: "" Nov 16 10:33:30.675: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 16 10:33:30.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3152' Nov 16 10:33:30.868: INFO: stderr: "" Nov 16 10:33:30.868: INFO: stdout: "update-demo-nautilus-bp7pm update-demo-nautilus-tmnz6 " Nov 16 10:33:30.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp7pm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3152' Nov 16 10:33:30.991: INFO: stderr: "" Nov 16 10:33:30.991: INFO: stdout: "" Nov 16 10:33:30.991: INFO: update-demo-nautilus-bp7pm is created but not running Nov 16 10:33:35.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3152' Nov 16 10:33:36.102: INFO: stderr: "" Nov 16 10:33:36.102: INFO: stdout: "update-demo-nautilus-bp7pm update-demo-nautilus-tmnz6 " Nov 16 10:33:36.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp7pm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3152' Nov 16 10:33:36.211: INFO: stderr: "" Nov 16 10:33:36.211: INFO: stdout: "true" Nov 16 10:33:36.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp7pm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3152' Nov 16 10:33:36.309: INFO: stderr: "" Nov 16 10:33:36.309: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 10:33:36.309: INFO: validating pod update-demo-nautilus-bp7pm Nov 16 10:33:36.323: INFO: got data: { "image": "nautilus.jpg" } Nov 16 10:33:36.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 16 10:33:36.323: INFO: update-demo-nautilus-bp7pm is verified up and running Nov 16 10:33:36.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tmnz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3152' Nov 16 10:33:36.425: INFO: stderr: "" Nov 16 10:33:36.425: INFO: stdout: "true" Nov 16 10:33:36.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tmnz6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3152' Nov 16 10:33:36.537: INFO: stderr: "" Nov 16 10:33:36.537: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 16 10:33:36.537: INFO: validating pod update-demo-nautilus-tmnz6 Nov 16 10:33:36.542: INFO: got data: { "image": "nautilus.jpg" } Nov 16 10:33:36.542: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 16 10:33:36.542: INFO: update-demo-nautilus-tmnz6 is verified up and running STEP: using delete to clean up resources Nov 16 10:33:36.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3152' Nov 16 10:33:36.640: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:33:36.640: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 16 10:33:36.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3152' Nov 16 10:33:36.748: INFO: stderr: "No resources found in kubectl-3152 namespace.\n" Nov 16 10:33:36.748: INFO: stdout: "" Nov 16 10:33:36.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3152 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 16 10:33:36.873: INFO: stderr: "" Nov 16 10:33:36.873: INFO: stdout: "update-demo-nautilus-bp7pm\nupdate-demo-nautilus-tmnz6\n" Nov 16 10:33:37.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get 
rc,svc -l name=update-demo --no-headers --namespace=kubectl-3152' Nov 16 10:33:37.492: INFO: stderr: "No resources found in kubectl-3152 namespace.\n" Nov 16 10:33:37.492: INFO: stdout: "" Nov 16 10:33:37.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3152 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 16 10:33:37.610: INFO: stderr: "" Nov 16 10:33:37.610: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:33:37.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3152" for this suite. • [SLOW TEST:7.297 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":261,"skipped":4414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:37.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:33:41.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8391" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4438,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:41.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-0306b6a4-3662-450e-b5c6-1e5a404c1cfe STEP: Creating a pod to test consume configMaps Nov 16 10:33:42.083: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221" in namespace "projected-529" to be "Succeeded or Failed" Nov 16 10:33:42.105: INFO: Pod "pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221": Phase="Pending", Reason="", readiness=false. Elapsed: 21.781262ms Nov 16 10:33:44.239: INFO: Pod "pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156009368s Nov 16 10:33:46.244: INFO: Pod "pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.160414558s STEP: Saw pod success Nov 16 10:33:46.244: INFO: Pod "pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221" satisfied condition "Succeeded or Failed" Nov 16 10:33:46.246: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221 container projected-configmap-volume-test: STEP: delete the pod Nov 16 10:33:46.265: INFO: Waiting for pod pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221 to disappear Nov 16 10:33:46.284: INFO: Pod pod-projected-configmaps-afd3bb32-d7a0-4481-8997-988b23293221 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:33:46.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-529" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:46.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 16 10:33:51.536: INFO: Successfully updated pod "pod-update-activedeadlineseconds-35c5efca-9d5c-4f18-b1f7-2a1125b9c11c" Nov 16 10:33:51.536: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-35c5efca-9d5c-4f18-b1f7-2a1125b9c11c" in namespace "pods-6867" to be "terminated due to deadline exceeded" Nov 16 10:33:51.567: INFO: Pod "pod-update-activedeadlineseconds-35c5efca-9d5c-4f18-b1f7-2a1125b9c11c": Phase="Running", Reason="", readiness=true. Elapsed: 31.041769ms Nov 16 10:33:53.573: INFO: Pod "pod-update-activedeadlineseconds-35c5efca-9d5c-4f18-b1f7-2a1125b9c11c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.036390516s Nov 16 10:33:53.573: INFO: Pod "pod-update-activedeadlineseconds-35c5efca-9d5c-4f18-b1f7-2a1125b9c11c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:33:53.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6867" for this suite. 
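The phase flip logged above (Running at 31ms elapsed, then Failed with Reason="DeadlineExceeded" about two seconds later) is the kubelet enforcing the pod's updated `activeDeadlineSeconds` against its start time. A toy model of that check (names and signature are illustrative, not kubelet code):

```python
def pod_phase(started_at, now, active_deadline_seconds):
    """Report (phase, reason) the way activeDeadlineSeconds is enforced:
    once the pod has been active for at least the deadline it is Failed
    with reason DeadlineExceeded; until then it keeps Running."""
    if (active_deadline_seconds is not None
            and now - started_at >= active_deadline_seconds):
        return ("Failed", "DeadlineExceeded")
    return ("Running", "")

# The test patches the deadline down to a few seconds and polls until
# the "terminated due to deadline exceeded" condition is satisfied.
before = pod_phase(started_at=0.0, now=1.0, active_deadline_seconds=5)
after = pod_phase(started_at=0.0, now=5.5, active_deadline_seconds=5)
```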
• [SLOW TEST:7.292 seconds] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":264,"skipped":4486,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:33:53.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 16 10:34:01.765: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:01.774: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:03.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:03.779: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:05.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:05.778: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:07.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:07.778: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:09.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:09.778: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:11.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:11.779: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:13.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:13.779: INFO: Pod pod-with-poststart-http-hook still exists Nov 16 10:34:15.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 16 10:34:15.778: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:15.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2362" for this suite. 
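The two-second "Waiting for pod pod-with-poststart-http-hook to disappear" cadence above is the framework's generic poll-with-timeout loop. A self-contained sketch of the same pattern (the `wait_for` name and parameters are invented for illustration; a real caller would query the API server between attempts):

```python
import itertools
import time

def wait_for(condition, timeout, interval,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. True on success, False on timeout."""
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Simulate a pod that is gone on the 8th check, as in the log above;
# sleeping is stubbed out so the sketch runs instantly.
checks = itertools.count(1)
gone = wait_for(lambda: next(checks) >= 8,
                timeout=60, interval=2, sleep=lambda s: None)
```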
• [SLOW TEST:22.201 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4488,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:15.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Nov 16 10:34:23.480: INFO: Successfully updated pod "labelsupdate6cda9236-22c5-4dd4-a1f3-9b6a893e2719" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:25.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3959" for this suite. • [SLOW TEST:9.933 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4493,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:25.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 16 10:34:25.852: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 16 10:34:25.860: INFO: Waiting for terminating namespaces to be deleted... Nov 16 10:34:25.862: INFO: Logging pods the apiserver thinks is on node latest-worker before test Nov 16 10:34:25.868: INFO: pod-handle-http-request from container-lifecycle-hook-2362 started at 2020-11-16 10:33:53 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.868: INFO: Container pod-handle-http-request ready: false, restart count 0 Nov 16 10:34:25.868: INFO: labelsupdate6cda9236-22c5-4dd4-a1f3-9b6a893e2719 from downward-api-3959 started at 2020-11-16 10:34:15 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.868: INFO: Container client-container ready: true, restart count 0 Nov 16 10:34:25.868: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.868: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 10:34:25.868: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.868: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 10:34:25.868: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Nov 16 10:34:25.874: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.874: INFO: Container coredns ready: true, restart count 0 Nov 16 10:34:25.874: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.874: INFO: Container coredns ready: true, restart count 0 Nov 16 10:34:25.874: INFO: kindnet-g7vp5 from kube-system started at 
2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.874: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 10:34:25.874: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.874: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 10:34:25.874: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 10:34:25.874: INFO: Container local-path-provisioner ready: true, restart count 1 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-18b22cba-dd77-4480-b8bf-42c23c2bcd68 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-18b22cba-dd77-4480-b8bf-42c23c2bcd68 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-18b22cba-dd77-4480-b8bf-42c23c2bcd68 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:34.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9529" for this suite. 
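The "relaunch the pod, now with labels" step above corresponds to a pod spec carrying a nodeSelector for the random label just applied to the node. A sketch of that pod, assuming a pause-style image (the label key and value "42" are taken from the log; the image is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                    # pod name as it appears later in this run
spec:
  nodeSelector:
    kubernetes.io/e2e-18b22cba-dd77-4480-b8bf-42c23c2bcd68: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2        # hypothetical image
```

Because exactly one node (latest-worker) carries the label, the scheduler must place the pod there, which is what the test validates before removing the label again.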
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.692 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":267,"skipped":4497,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:34.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:34.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7526" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":268,"skipped":4513,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:34.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-72cb350b-7fb2-40d7-8a44-a25f894cc69f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:40.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1195" for this suite. • [SLOW TEST:6.136 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4535,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:40.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-3660/secret-test-944e4111-6c2c-4ce8-b66c-f111e66f7e46 STEP: Creating a pod to test consume secrets Nov 16 
10:34:40.711: INFO: Waiting up to 5m0s for pod "pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510" in namespace "secrets-3660" to be "Succeeded or Failed" Nov 16 10:34:40.716: INFO: Pod "pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613429ms Nov 16 10:34:42.719: INFO: Pod "pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008544065s Nov 16 10:34:44.724: INFO: Pod "pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013018617s STEP: Saw pod success Nov 16 10:34:44.724: INFO: Pod "pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510" satisfied condition "Succeeded or Failed" Nov 16 10:34:44.732: INFO: Trying to get logs from node latest-worker pod pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510 container env-test: STEP: delete the pod Nov 16 10:34:44.765: INFO: Waiting for pod pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510 to disappear Nov 16 10:34:44.781: INFO: Pod pod-configmaps-eaa0eab7-2eaa-4d9a-8fd7-82e207886510 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:44.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3660" for this suite. 
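The secret-via-environment test above amounts to a pod that maps a Secret key into a container environment variable and exits once it has printed its environment. A hedged sketch, assuming a busybox image and a hypothetical key and variable name (the secret name and container name `env-test` are from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example         # hypothetical; the logged name is generated
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29                # hypothetical image
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA                # hypothetical variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-944e4111-6c2c-4ce8-b66c-f111e66f7e46
          key: data                    # hypothetical key
```

The framework waits for the pod to reach Succeeded, then reads the container log to confirm the variable held the secret's value, matching the "Succeeded or Failed" polling seen above.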
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4548,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:44.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 16 10:34:45.330: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 16 10:34:45.343: INFO: Waiting for terminating namespaces to be deleted... 
Nov 16 10:34:45.346: INFO: Logging pods the apiserver thinks is on node latest-worker before test Nov 16 10:34:45.351: INFO: pod-configmaps-489b840e-4285-4a89-bfab-1deadfd8ad2e from configmap-1195 started at 2020-11-16 10:34:34 +0000 UTC (2 container statuses recorded) Nov 16 10:34:45.352: INFO: Container configmap-volume-binary-test ready: false, restart count 0 Nov 16 10:34:45.352: INFO: Container configmap-volume-data-test ready: true, restart count 0 Nov 16 10:34:45.352: INFO: labelsupdate6cda9236-22c5-4dd4-a1f3-9b6a893e2719 from downward-api-3959 started at 2020-11-16 10:34:15 +0000 UTC (1 container statuses recorded) Nov 16 10:34:45.352: INFO: Container client-container ready: false, restart count 0 Nov 16 10:34:45.352: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Nov 16 10:34:45.352: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 10:34:45.352: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 10:34:45.352: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 10:34:45.352: INFO: with-labels from sched-pred-9529 started at 2020-11-16 10:34:30 +0000 UTC (1 container statuses recorded) Nov 16 10:34:45.352: INFO: Container with-labels ready: false, restart count 0 Nov 16 10:34:45.352: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Nov 16 10:34:46.079: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Nov 16 10:34:46.079: INFO: Container coredns ready: true, restart count 0 Nov 16 10:34:46.079: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 10:34:46.080: INFO: Container coredns ready: true, restart count 0 Nov 16 10:34:46.080: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses 
recorded) Nov 16 10:34:46.080: INFO: Container kindnet-cni ready: true, restart count 0 Nov 16 10:34:46.080: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Nov 16 10:34:46.080: INFO: Container kube-proxy ready: true, restart count 0 Nov 16 10:34:46.080: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Nov 16 10:34:46.080: INFO: Container local-path-provisioner ready: true, restart count 1 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1647f68f00ce1a53], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1647f68f06abd28d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:47.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4774" for this suite. 
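The non-matching case above schedules a pod whose nodeSelector no node can satisfy, then asserts the FailedScheduling events quoted in the log. A sketch of such a pod, with a hypothetical selector key/value (only the pod name `restricted-pod` comes from the events above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e.example/nonexistent: "true"    # hypothetical; no node carries this label
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.2        # hypothetical image
```

The pod stays Pending and the scheduler emits "0/3 nodes are available: 3 node(s) didn't match node selector", which is exactly the event text the test watches for.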
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":271,"skipped":4564,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:47.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Nov 16 10:34:47.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1544' Nov 16 10:34:47.767: INFO: stderr: "" Nov 16 10:34:47.767: INFO: stdout: "pod/pause created\n" Nov 16 10:34:47.767: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Nov 16 10:34:47.767: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1544" to be "running and ready" Nov 16 10:34:47.815: INFO: Pod "pause": Phase="Pending", 
Reason="", readiness=false. Elapsed: 48.419559ms Nov 16 10:34:49.818: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05157647s Nov 16 10:34:51.824: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.056860791s Nov 16 10:34:51.824: INFO: Pod "pause" satisfied condition "running and ready" Nov 16 10:34:51.824: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Nov 16 10:34:51.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1544' Nov 16 10:34:51.926: INFO: stderr: "" Nov 16 10:34:51.926: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Nov 16 10:34:51.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1544' Nov 16 10:34:52.025: INFO: stderr: "" Nov 16 10:34:52.025: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Nov 16 10:34:52.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1544' Nov 16 10:34:52.151: INFO: stderr: "" Nov 16 10:34:52.151: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Nov 16 10:34:52.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod pause -L 
testing-label --namespace=kubectl-1544' Nov 16 10:34:52.255: INFO: stderr: "" Nov 16 10:34:52.255: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Nov 16 10:34:52.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1544' Nov 16 10:34:52.419: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 16 10:34:52.419: INFO: stdout: "pod \"pause\" force deleted\n" Nov 16 10:34:52.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1544' Nov 16 10:34:52.536: INFO: stderr: "No resources found in kubectl-1544 namespace.\n" Nov 16 10:34:52.536: INFO: stdout: "" Nov 16 10:34:52.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1544 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 16 10:34:52.750: INFO: stderr: "" Nov 16 10:34:52.751: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:34:52.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1544" for this suite. 
• [SLOW TEST:5.849 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":272,"skipped":4564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:34:53.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:36:53.157: INFO: Deleting pod "var-expansion-b8ea0e2e-4d40-41fd-9235-a817247e8146" in namespace "var-expansion-3532" Nov 16 10:36:53.163: INFO: Wait up to 5m0s for pod 
"var-expansion-b8ea0e2e-4d40-41fd-9235-a817247e8146" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:36:55.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3532" for this suite. • [SLOW TEST:122.225 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":273,"skipped":4590,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:36:55.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2bf6fc8b-e332-40c4-9ebd-7db61e08f263 STEP: Creating configMap with name cm-test-opt-upd-b7d9ce1a-ae7c-4c5f-9bb6-baa460eb6bbc STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2bf6fc8b-e332-40c4-9ebd-7db61e08f263 STEP: Updating configmap cm-test-opt-upd-b7d9ce1a-ae7c-4c5f-9bb6-baa460eb6bbc STEP: Creating configMap with name cm-test-opt-create-3e812049-4851-4d0c-90d1-f65e79100c97 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:38:21.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4309" for this suite. • [SLOW TEST:86.594 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4603,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:38:21.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0eab7ccb-68e5-41ec-8d29-ead006d1482d STEP: Creating a pod to test consume secrets Nov 16 10:38:21.925: INFO: Waiting up to 5m0s for pod "pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e" in namespace "secrets-5653" to be "Succeeded or Failed" Nov 16 10:38:21.932: INFO: Pod "pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.937183ms Nov 16 10:38:23.937: INFO: Pod "pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01215589s Nov 16 10:38:25.942: INFO: Pod "pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016832333s STEP: Saw pod success Nov 16 10:38:25.942: INFO: Pod "pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e" satisfied condition "Succeeded or Failed" Nov 16 10:38:25.945: INFO: Trying to get logs from node latest-worker pod pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e container secret-volume-test: STEP: delete the pod Nov 16 10:38:25.981: INFO: Waiting for pod pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e to disappear Nov 16 10:38:25.992: INFO: Pod pod-secrets-1111983a-36a0-4d85-8de4-993abbf0737e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:38:25.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5653" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4624,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:38:26.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty 
when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 16 10:38:31.491: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:38:31.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6819" for this suite. • [SLOW TEST:5.535 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set 
[NodeConformance] [Conformance]","total":303,"completed":276,"skipped":4635,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:38:31.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:39:31.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1698" for this suite. 
• [SLOW TEST:60.111 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4640,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:39:31.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2320 [It] should perform canary updates and phased 
rolling updates of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Nov 16 10:39:31.797: INFO: Found 0 stateful pods, waiting for 3 Nov 16 10:39:41.802: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 10:39:41.802: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 10:39:41.802: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 16 10:39:51.802: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 10:39:51.802: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 10:39:51.802: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Nov 16 10:39:51.833: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Nov 16 10:40:01.954: INFO: Updating stateful set ss2 Nov 16 10:40:02.016: INFO: Waiting for Pod statefulset-2320/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 16 10:40:12.024: INFO: Waiting for Pod statefulset-2320/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Nov 16 10:40:22.590: INFO: Found 2 stateful pods, waiting for 3 Nov 16 10:40:32.597: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 16 10:40:32.597: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 16 10:40:32.597: INFO: 
Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Nov 16 10:40:32.620: INFO: Updating stateful set ss2 Nov 16 10:40:32.680: INFO: Waiting for Pod statefulset-2320/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 16 10:40:42.689: INFO: Waiting for Pod statefulset-2320/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 16 10:40:52.714: INFO: Updating stateful set ss2 Nov 16 10:40:52.798: INFO: Waiting for StatefulSet statefulset-2320/ss2 to complete update Nov 16 10:40:52.798: INFO: Waiting for Pod statefulset-2320/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 16 10:41:02.809: INFO: Waiting for StatefulSet statefulset-2320/ss2 to complete update Nov 16 10:41:02.809: INFO: Waiting for Pod statefulset-2320/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 16 10:41:12.804: INFO: Deleting all statefulset in ns statefulset-2320 Nov 16 10:41:12.806: INFO: Scaling statefulset ss2 to 0 Nov 16 10:41:42.849: INFO: Waiting for statefulset status.replicas updated to 0 Nov 16 10:41:42.853: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:41:42.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2320" for this suite. 
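The canary and phased steps above are driven by the RollingUpdate partition: the controller only moves pods with ordinal >= spec.updateStrategy.rollingUpdate.partition to the new revision, working from the highest ordinal down. A small sketch of that selection rule (illustrative, not controller code):

```python
def ordinals_updated(replicas, partition):
    """Ordinals rolled to the new revision under a partitioned RollingUpdate:
    only those >= partition, highest ordinal first (illustrative sketch)."""
    return [i for i in range(replicas - 1, -1, -1) if i >= partition]
```

With 3 replicas, a partition at or above the replica count updates nothing (the "partition is greater than the number of replicas" step), a partition of 2 updates only ss2-2 (the canary), and lowering it to 0 rolls all three pods, as the phased update in the log does.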
• [SLOW TEST:131.241 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":278,"skipped":4657,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:41:42.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Nov 16 10:41:43.010: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:41:43.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8699" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":279,"skipped":4665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:41:43.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:41:52.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-535" for this suite. • [SLOW TEST:9.277 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4692,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:41:52.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 16 10:41:56.667: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:41:56.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8239" for this suite. 
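Together with the earlier "as empty when pod succeeds" case, the termination-message test above pins down how TerminationMessagePolicy: FallbackToLogsOnError resolves: the file at terminationMessagePath wins whenever it has content, and the log tail is used only when the file is empty and the container failed. A simplified sketch of that resolution (assumed simplification, not kubelet code):

```python
def termination_message(policy, file_contents, exit_code, logs_tail):
    """Illustrative sketch of TerminationMessagePolicy resolution: the message
    file wins when non-empty; FallbackToLogsOnError falls back to the log tail
    only when the file is empty AND the container exited nonzero."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs_tail
    return ""
```

A succeeding pod with an empty message file therefore reports an empty message (the `Expected: &{}` assertion earlier in the log), while a file containing OK is returned verbatim (the `Expected: &{OK}` assertion above).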
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:41:56.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 
'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:42:31.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2871" for this suite. • [SLOW TEST:34.153 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:42:31.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-321c59bb-7da5-4afa-a9fd-444dddf4390e STEP: Creating a pod to test consume secrets Nov 16 10:42:31.197: INFO: Waiting up to 5m0s for pod "pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227" in namespace "secrets-7582" to be "Succeeded or Failed" Nov 16 10:42:31.203: INFO: Pod "pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227": Phase="Pending", Reason="", readiness=false. Elapsed: 5.6983ms Nov 16 10:42:33.234: INFO: Pod "pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036923261s Nov 16 10:42:35.240: INFO: Pod "pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042615499s STEP: Saw pod success Nov 16 10:42:35.240: INFO: Pod "pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227" satisfied condition "Succeeded or Failed" Nov 16 10:42:35.243: INFO: Trying to get logs from node latest-worker pod pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227 container secret-volume-test: STEP: delete the pod Nov 16 10:42:35.361: INFO: Waiting for pod pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227 to disappear Nov 16 10:42:35.389: INFO: Pod pod-secrets-718aaa00-2b6d-4d45-b55e-b134bbb39227 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:42:35.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7582" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":283,"skipped":4743,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:42:35.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Nov 16 10:42:35.589: INFO: Waiting up to 5m0s for pod "client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c" in namespace "containers-3974" to be "Succeeded or Failed" Nov 16 10:42:35.617: INFO: Pod "client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.025985ms Nov 16 10:42:37.765: INFO: Pod "client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176604856s Nov 16 10:42:39.769: INFO: Pod "client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180457363s STEP: Saw pod success Nov 16 10:42:39.769: INFO: Pod "client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c" satisfied condition "Succeeded or Failed" Nov 16 10:42:39.772: INFO: Trying to get logs from node latest-worker pod client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c container test-container: STEP: delete the pod Nov 16 10:42:39.828: INFO: Waiting for pod client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c to disappear Nov 16 10:42:39.909: INFO: Pod client-containers-fb73f1b7-1099-48f9-bbd0-327f302d734c no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:42:39.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3974" for this suite. 
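The "override all" test above exercises the standard interaction between a container's command/args and the image's ENTRYPOINT/CMD: command replaces ENTRYPOINT, args replaces CMD, an unset field falls back to the image value, and the image CMD is ignored whenever command is set. A sketch of the combination rule (illustrative):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Illustrative sketch of how pod command/args combine with the image's
    ENTRYPOINT/CMD: command replaces ENTRYPOINT (dropping CMD), args replaces
    CMD, and unset fields fall back to the image defaults."""
    if command is not None:
        return command + (args if args is not None else [])
    return entrypoint + (args if args is not None else cmd)
```

Setting both fields, as this test does, makes the image's ENTRYPOINT and CMD entirely irrelevant to what the container runs.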
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4744,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:42:39.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7556 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 16 10:42:39.977: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 16 10:42:40.113: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:42:42.127: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:42:44.220: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 16 10:42:48.551: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:42:50.124: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 16 10:42:52.125: INFO: The status 
of Pod netserver-0 is Running (Ready = true) Nov 16 10:42:52.130: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:42:54.135: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:42:56.135: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:42:58.136: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:43:00.134: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:43:02.135: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:43:04.134: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:43:06.135: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:43:08.134: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 16 10:43:10.134: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 16 10:43:14.211: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.25:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7556 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:43:14.211: INFO: >>> kubeConfig: /root/.kube/config I1116 10:43:14.251447 7 log.go:181] (0xc0068d04d0) (0xc0039f8780) Create stream I1116 10:43:14.251477 7 log.go:181] (0xc0068d04d0) (0xc0039f8780) Stream added, broadcasting: 1 I1116 10:43:14.253950 7 log.go:181] (0xc0068d04d0) Reply frame received for 1 I1116 10:43:14.254010 7 log.go:181] (0xc0068d04d0) (0xc0039f8960) Create stream I1116 10:43:14.254030 7 log.go:181] (0xc0068d04d0) (0xc0039f8960) Stream added, broadcasting: 3 I1116 10:43:14.255179 7 log.go:181] (0xc0068d04d0) Reply frame received for 3 I1116 10:43:14.255219 7 log.go:181] (0xc0068d04d0) (0xc0039f8aa0) Create stream I1116 10:43:14.255234 7 log.go:181] (0xc0068d04d0) (0xc0039f8aa0) Stream added, broadcasting: 5 I1116 10:43:14.256197 7 
log.go:181] (0xc0068d04d0) Reply frame received for 5 I1116 10:43:14.332493 7 log.go:181] (0xc0068d04d0) Data frame received for 3 I1116 10:43:14.332553 7 log.go:181] (0xc0039f8960) (3) Data frame handling I1116 10:43:14.332572 7 log.go:181] (0xc0039f8960) (3) Data frame sent I1116 10:43:14.332603 7 log.go:181] (0xc0068d04d0) Data frame received for 3 I1116 10:43:14.332615 7 log.go:181] (0xc0039f8960) (3) Data frame handling I1116 10:43:14.332664 7 log.go:181] (0xc0068d04d0) Data frame received for 5 I1116 10:43:14.332720 7 log.go:181] (0xc0039f8aa0) (5) Data frame handling I1116 10:43:14.334483 7 log.go:181] (0xc0068d04d0) Data frame received for 1 I1116 10:43:14.334515 7 log.go:181] (0xc0039f8780) (1) Data frame handling I1116 10:43:14.334551 7 log.go:181] (0xc0039f8780) (1) Data frame sent I1116 10:43:14.334579 7 log.go:181] (0xc0068d04d0) (0xc0039f8780) Stream removed, broadcasting: 1 I1116 10:43:14.334609 7 log.go:181] (0xc0068d04d0) Go away received I1116 10:43:14.334722 7 log.go:181] (0xc0068d04d0) (0xc0039f8780) Stream removed, broadcasting: 1 I1116 10:43:14.334742 7 log.go:181] (0xc0068d04d0) (0xc0039f8960) Stream removed, broadcasting: 3 I1116 10:43:14.334755 7 log.go:181] (0xc0068d04d0) (0xc0039f8aa0) Stream removed, broadcasting: 5 Nov 16 10:43:14.334: INFO: Found all expected endpoints: [netserver-0] Nov 16 10:43:14.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.38:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7556 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 16 10:43:14.338: INFO: >>> kubeConfig: /root/.kube/config I1116 10:43:14.370181 7 log.go:181] (0xc0068d0b00) (0xc0039f8e60) Create stream I1116 10:43:14.370205 7 log.go:181] (0xc0068d0b00) (0xc0039f8e60) Stream added, broadcasting: 1 I1116 10:43:14.375955 7 log.go:181] (0xc0068d0b00) Reply frame received for 1 I1116 10:43:14.376035 7 
log.go:181] (0xc0068d0b00) (0xc003c16000) Create stream I1116 10:43:14.376120 7 log.go:181] (0xc0068d0b00) (0xc003c16000) Stream added, broadcasting: 3 I1116 10:43:14.378624 7 log.go:181] (0xc0068d0b00) Reply frame received for 3 I1116 10:43:14.378672 7 log.go:181] (0xc0068d0b00) (0xc0036bcb40) Create stream I1116 10:43:14.378687 7 log.go:181] (0xc0068d0b00) (0xc0036bcb40) Stream added, broadcasting: 5 I1116 10:43:14.379693 7 log.go:181] (0xc0068d0b00) Reply frame received for 5 I1116 10:43:14.456937 7 log.go:181] (0xc0068d0b00) Data frame received for 3 I1116 10:43:14.456975 7 log.go:181] (0xc003c16000) (3) Data frame handling I1116 10:43:14.457001 7 log.go:181] (0xc003c16000) (3) Data frame sent I1116 10:43:14.457253 7 log.go:181] (0xc0068d0b00) Data frame received for 5 I1116 10:43:14.457282 7 log.go:181] (0xc0036bcb40) (5) Data frame handling I1116 10:43:14.457309 7 log.go:181] (0xc0068d0b00) Data frame received for 3 I1116 10:43:14.457323 7 log.go:181] (0xc003c16000) (3) Data frame handling I1116 10:43:14.459455 7 log.go:181] (0xc0068d0b00) Data frame received for 1 I1116 10:43:14.459479 7 log.go:181] (0xc0039f8e60) (1) Data frame handling I1116 10:43:14.459499 7 log.go:181] (0xc0039f8e60) (1) Data frame sent I1116 10:43:14.459516 7 log.go:181] (0xc0068d0b00) (0xc0039f8e60) Stream removed, broadcasting: 1 I1116 10:43:14.459532 7 log.go:181] (0xc0068d0b00) Go away received I1116 10:43:14.459621 7 log.go:181] (0xc0068d0b00) (0xc0039f8e60) Stream removed, broadcasting: 1 I1116 10:43:14.459644 7 log.go:181] (0xc0068d0b00) (0xc003c16000) Stream removed, broadcasting: 3 I1116 10:43:14.459656 7 log.go:181] (0xc0068d0b00) (0xc0036bcb40) Stream removed, broadcasting: 5 Nov 16 10:43:14.459: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:43:14.459: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7556" for this suite. • [SLOW TEST:34.541 seconds] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4752,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:43:14.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-map-e99dc7a5-3ca3-48b8-86ea-b2bc51cea81a STEP: Creating a pod to test consume configMaps Nov 16 10:43:14.573: INFO: Waiting up to 5m0s for pod "pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde" in namespace "configmap-4801" to be "Succeeded or Failed" Nov 16 10:43:14.589: INFO: Pod "pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 15.832217ms Nov 16 10:43:16.593: INFO: Pod "pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020073359s Nov 16 10:43:18.622: INFO: Pod "pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049345609s STEP: Saw pod success Nov 16 10:43:18.622: INFO: Pod "pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde" satisfied condition "Succeeded or Failed" Nov 16 10:43:18.654: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde container configmap-volume-test: STEP: delete the pod Nov 16 10:43:18.774: INFO: Waiting for pod pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde to disappear Nov 16 10:43:18.780: INFO: Pod pod-configmaps-e37d09f3-c67b-4ff6-9f21-19952cbc4cde no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:43:18.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4801" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4754,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:43:18.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Nov 16 10:43:18.902: INFO: Waiting up to 5m0s for pod "client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa" in namespace "containers-4471" to be "Succeeded or Failed" Nov 16 10:43:18.920: INFO: Pod "client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa": Phase="Pending", Reason="", readiness=false. Elapsed: 16.972178ms Nov 16 10:43:21.167: INFO: Pod "client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264752753s Nov 16 10:43:23.172: INFO: Pod "client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.269919575s STEP: Saw pod success Nov 16 10:43:23.173: INFO: Pod "client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa" satisfied condition "Succeeded or Failed" Nov 16 10:43:23.176: INFO: Trying to get logs from node latest-worker pod client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa container test-container: STEP: delete the pod Nov 16 10:43:23.206: INFO: Waiting for pod client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa to disappear Nov 16 10:43:23.218: INFO: Pod client-containers-ad0e8434-a2ce-4dd3-abfa-55b53c7479aa no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:43:23.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4471" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4762,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:43:23.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Nov 16 10:43:31.395: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:31.398: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:33.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:33.403: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:35.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:35.402: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:37.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:37.404: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:39.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:39.404: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:41.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:41.404: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:43.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:43.404: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:45.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:45.403: INFO: Pod pod-with-prestop-http-hook still exists Nov 16 10:43:47.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 16 10:43:47.404: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:43:47.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-762" for this suite. • [SLOW TEST:24.195 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4767,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:43:47.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8253.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8253.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8253.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 16 10:43:53.540: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.543: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.547: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.550: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.561: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.564: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from 
pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.568: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.571: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:53.578: INFO: Lookups using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local] Nov 16 10:43:58.593: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.596: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.598: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local from 
pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.601: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.609: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.611: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.613: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.615: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:43:58.620: INFO: Lookups using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local] Nov 16 10:44:03.584: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.588: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.591: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.595: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.605: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.607: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.610: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod 
dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.613: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:03.618: INFO: Lookups using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local] Nov 16 10:44:08.583: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.586: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.590: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.592: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod 
dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.601: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.604: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.607: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.610: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:08.617: INFO: Lookups using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local] Nov 16 10:44:13.583: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.587: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.590: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.617: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.642: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.665: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.670: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.673: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:13.679: INFO: Lookups using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local] Nov 16 10:44:18.619: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.622: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.627: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.630: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.637: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.639: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.642: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.644: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local from pod dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1: the server could not find the requested resource (get pods dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1) Nov 16 10:44:18.649: INFO: Lookups using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8253.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8253.svc.cluster.local jessie_udp@dns-test-service-2.dns-8253.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8253.svc.cluster.local] Nov 16 10:44:23.619: INFO: DNS probes using dns-8253/dns-test-6336b2c1-74d2-4941-b361-df2766ee46f1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:44:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8253" for this suite. • [SLOW TEST:36.575 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":289,"skipped":4777,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:44:23.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4312 Nov 16 10:44:30.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 16 10:44:33.337: INFO: stderr: "I1116 10:44:33.225508 3921 log.go:181] (0xc000a64000) (0xc0005de000) Create stream\nI1116 10:44:33.225564 3921 log.go:181] (0xc000a64000) (0xc0005de000) Stream added, broadcasting: 1\nI1116 10:44:33.227317 3921 log.go:181] (0xc000a64000) Reply frame received for 1\nI1116 10:44:33.227341 3921 log.go:181] (0xc000a64000) (0xc0005de140) Create stream\nI1116 10:44:33.227347 3921 log.go:181] (0xc000a64000) (0xc0005de140) Stream added, broadcasting: 3\nI1116 10:44:33.228334 3921 log.go:181] (0xc000a64000) Reply frame received for 3\nI1116 10:44:33.228380 3921 log.go:181] (0xc000a64000) (0xc00016c3c0) Create stream\nI1116 10:44:33.228412 3921 log.go:181] (0xc000a64000) (0xc00016c3c0) Stream added, broadcasting: 5\nI1116 10:44:33.229730 3921 log.go:181] (0xc000a64000) Reply frame received for 5\nI1116 10:44:33.319795 3921 log.go:181] (0xc000a64000) Data frame received for 5\nI1116 10:44:33.319825 3921 log.go:181] (0xc00016c3c0) (5) Data frame handling\nI1116 10:44:33.319845 3921 log.go:181] (0xc00016c3c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1116 10:44:33.325409 3921 log.go:181] (0xc000a64000) Data frame received for 3\nI1116 10:44:33.325429 3921 log.go:181] (0xc0005de140) (3) Data frame handling\nI1116 10:44:33.325445 3921 log.go:181] (0xc0005de140) (3) Data frame sent\nI1116 10:44:33.325913 3921 log.go:181] (0xc000a64000) Data frame received for 3\nI1116 10:44:33.325939 3921 log.go:181] (0xc0005de140) (3) Data frame 
handling\nI1116 10:44:33.326140 3921 log.go:181] (0xc000a64000) Data frame received for 5\nI1116 10:44:33.326160 3921 log.go:181] (0xc00016c3c0) (5) Data frame handling\nI1116 10:44:33.328266 3921 log.go:181] (0xc000a64000) Data frame received for 1\nI1116 10:44:33.328397 3921 log.go:181] (0xc0005de000) (1) Data frame handling\nI1116 10:44:33.328443 3921 log.go:181] (0xc0005de000) (1) Data frame sent\nI1116 10:44:33.328467 3921 log.go:181] (0xc000a64000) (0xc0005de000) Stream removed, broadcasting: 1\nI1116 10:44:33.328501 3921 log.go:181] (0xc000a64000) Go away received\nI1116 10:44:33.329114 3921 log.go:181] (0xc000a64000) (0xc0005de000) Stream removed, broadcasting: 1\nI1116 10:44:33.329132 3921 log.go:181] (0xc000a64000) (0xc0005de140) Stream removed, broadcasting: 3\nI1116 10:44:33.329139 3921 log.go:181] (0xc000a64000) (0xc00016c3c0) Stream removed, broadcasting: 5\n" Nov 16 10:44:33.337: INFO: stdout: "iptables" Nov 16 10:44:33.337: INFO: proxyMode: iptables Nov 16 10:44:33.344: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 16 10:44:33.385: INFO: Pod kube-proxy-mode-detector still exists Nov 16 10:44:35.386: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 16 10:44:35.391: INFO: Pod kube-proxy-mode-detector still exists Nov 16 10:44:37.386: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 16 10:44:37.390: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4312 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4312 I1116 10:44:37.570087 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4312, replica count: 3 I1116 10:44:40.621132 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 10:44:43.621441 7 runners.go:190] 
affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 10:44:46.621681 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1116 10:44:49.621982 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 16 10:44:49.631: INFO: Creating new exec pod Nov 16 10:44:54.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Nov 16 10:44:54.876: INFO: stderr: "I1116 10:44:54.804915 3939 log.go:181] (0xc000200160) (0xc000852320) Create stream\nI1116 10:44:54.804976 3939 log.go:181] (0xc000200160) (0xc000852320) Stream added, broadcasting: 1\nI1116 10:44:54.806561 3939 log.go:181] (0xc000200160) Reply frame received for 1\nI1116 10:44:54.806609 3939 log.go:181] (0xc000200160) (0xc000170000) Create stream\nI1116 10:44:54.806621 3939 log.go:181] (0xc000200160) (0xc000170000) Stream added, broadcasting: 3\nI1116 10:44:54.807393 3939 log.go:181] (0xc000200160) Reply frame received for 3\nI1116 10:44:54.807448 3939 log.go:181] (0xc000200160) (0xc0001700a0) Create stream\nI1116 10:44:54.807460 3939 log.go:181] (0xc000200160) (0xc0001700a0) Stream added, broadcasting: 5\nI1116 10:44:54.808187 3939 log.go:181] (0xc000200160) Reply frame received for 5\nI1116 10:44:54.865585 3939 log.go:181] (0xc000200160) Data frame received for 5\nI1116 10:44:54.865617 3939 log.go:181] (0xc0001700a0) (5) Data frame handling\nI1116 10:44:54.865634 3939 log.go:181] (0xc0001700a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI1116 10:44:54.865790 3939 log.go:181] (0xc000200160) Data frame received for 
5\nI1116 10:44:54.865808 3939 log.go:181] (0xc0001700a0) (5) Data frame handling\nI1116 10:44:54.865822 3939 log.go:181] (0xc0001700a0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1116 10:44:54.866223 3939 log.go:181] (0xc000200160) Data frame received for 5\nI1116 10:44:54.866243 3939 log.go:181] (0xc0001700a0) (5) Data frame handling\nI1116 10:44:54.866334 3939 log.go:181] (0xc000200160) Data frame received for 3\nI1116 10:44:54.866367 3939 log.go:181] (0xc000170000) (3) Data frame handling\nI1116 10:44:54.868160 3939 log.go:181] (0xc000200160) Data frame received for 1\nI1116 10:44:54.868178 3939 log.go:181] (0xc000852320) (1) Data frame handling\nI1116 10:44:54.868185 3939 log.go:181] (0xc000852320) (1) Data frame sent\nI1116 10:44:54.868194 3939 log.go:181] (0xc000200160) (0xc000852320) Stream removed, broadcasting: 1\nI1116 10:44:54.868270 3939 log.go:181] (0xc000200160) Go away received\nI1116 10:44:54.868472 3939 log.go:181] (0xc000200160) (0xc000852320) Stream removed, broadcasting: 1\nI1116 10:44:54.868484 3939 log.go:181] (0xc000200160) (0xc000170000) Stream removed, broadcasting: 3\nI1116 10:44:54.868488 3939 log.go:181] (0xc000200160) (0xc0001700a0) Stream removed, broadcasting: 5\n" Nov 16 10:44:54.876: INFO: stdout: "" Nov 16 10:44:54.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c nc -zv -t -w 2 10.105.39.241 80' Nov 16 10:44:55.107: INFO: stderr: "I1116 10:44:55.022090 3958 log.go:181] (0xc0009f0000) (0xc00014adc0) Create stream\nI1116 10:44:55.022189 3958 log.go:181] (0xc0009f0000) (0xc00014adc0) Stream added, broadcasting: 1\nI1116 10:44:55.024458 3958 log.go:181] (0xc0009f0000) Reply frame received for 1\nI1116 10:44:55.024533 3958 log.go:181] (0xc0009f0000) (0xc00014b4a0) Create stream\nI1116 10:44:55.024553 3958 log.go:181] (0xc0009f0000) (0xc00014b4a0) Stream 
added, broadcasting: 3\nI1116 10:44:55.025888 3958 log.go:181] (0xc0009f0000) Reply frame received for 3\nI1116 10:44:55.025935 3958 log.go:181] (0xc0009f0000) (0xc00071a000) Create stream\nI1116 10:44:55.025947 3958 log.go:181] (0xc0009f0000) (0xc00071a000) Stream added, broadcasting: 5\nI1116 10:44:55.026840 3958 log.go:181] (0xc0009f0000) Reply frame received for 5\nI1116 10:44:55.098108 3958 log.go:181] (0xc0009f0000) Data frame received for 5\nI1116 10:44:55.098162 3958 log.go:181] (0xc00071a000) (5) Data frame handling\nI1116 10:44:55.098194 3958 log.go:181] (0xc00071a000) (5) Data frame sent\nI1116 10:44:55.098223 3958 log.go:181] (0xc0009f0000) Data frame received for 5\nI1116 10:44:55.098239 3958 log.go:181] (0xc00071a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.39.241 80\nConnection to 10.105.39.241 80 port [tcp/http] succeeded!\nI1116 10:44:55.098295 3958 log.go:181] (0xc0009f0000) Data frame received for 3\nI1116 10:44:55.098335 3958 log.go:181] (0xc00014b4a0) (3) Data frame handling\nI1116 10:44:55.100135 3958 log.go:181] (0xc0009f0000) Data frame received for 1\nI1116 10:44:55.100168 3958 log.go:181] (0xc00014adc0) (1) Data frame handling\nI1116 10:44:55.100182 3958 log.go:181] (0xc00014adc0) (1) Data frame sent\nI1116 10:44:55.100192 3958 log.go:181] (0xc0009f0000) (0xc00014adc0) Stream removed, broadcasting: 1\nI1116 10:44:55.100205 3958 log.go:181] (0xc0009f0000) Go away received\nI1116 10:44:55.100641 3958 log.go:181] (0xc0009f0000) (0xc00014adc0) Stream removed, broadcasting: 1\nI1116 10:44:55.100666 3958 log.go:181] (0xc0009f0000) (0xc00014b4a0) Stream removed, broadcasting: 3\nI1116 10:44:55.100678 3958 log.go:181] (0xc0009f0000) (0xc00071a000) Stream removed, broadcasting: 5\n" Nov 16 10:44:55.107: INFO: stdout: "" Nov 16 10:44:55.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 
31278' Nov 16 10:44:55.327: INFO: stderr: "I1116 10:44:55.242561 3976 log.go:181] (0xc000e9b290) (0xc0007dc8c0) Create stream\nI1116 10:44:55.242646 3976 log.go:181] (0xc000e9b290) (0xc0007dc8c0) Stream added, broadcasting: 1\nI1116 10:44:55.248108 3976 log.go:181] (0xc000e9b290) Reply frame received for 1\nI1116 10:44:55.248153 3976 log.go:181] (0xc000e9b290) (0xc0007dc000) Create stream\nI1116 10:44:55.248166 3976 log.go:181] (0xc000e9b290) (0xc0007dc000) Stream added, broadcasting: 3\nI1116 10:44:55.249344 3976 log.go:181] (0xc000e9b290) Reply frame received for 3\nI1116 10:44:55.249393 3976 log.go:181] (0xc000e9b290) (0xc0003774a0) Create stream\nI1116 10:44:55.249404 3976 log.go:181] (0xc000e9b290) (0xc0003774a0) Stream added, broadcasting: 5\nI1116 10:44:55.250294 3976 log.go:181] (0xc000e9b290) Reply frame received for 5\nI1116 10:44:55.319537 3976 log.go:181] (0xc000e9b290) Data frame received for 5\nI1116 10:44:55.319659 3976 log.go:181] (0xc0003774a0) (5) Data frame handling\nI1116 10:44:55.319681 3976 log.go:181] (0xc0003774a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31278\nConnection to 172.18.0.15 31278 port [tcp/31278] succeeded!\nI1116 10:44:55.319718 3976 log.go:181] (0xc000e9b290) Data frame received for 3\nI1116 10:44:55.319756 3976 log.go:181] (0xc0007dc000) (3) Data frame handling\nI1116 10:44:55.319867 3976 log.go:181] (0xc000e9b290) Data frame received for 5\nI1116 10:44:55.319894 3976 log.go:181] (0xc0003774a0) (5) Data frame handling\nI1116 10:44:55.321995 3976 log.go:181] (0xc000e9b290) Data frame received for 1\nI1116 10:44:55.322011 3976 log.go:181] (0xc0007dc8c0) (1) Data frame handling\nI1116 10:44:55.322019 3976 log.go:181] (0xc0007dc8c0) (1) Data frame sent\nI1116 10:44:55.322032 3976 log.go:181] (0xc000e9b290) (0xc0007dc8c0) Stream removed, broadcasting: 1\nI1116 10:44:55.322042 3976 log.go:181] (0xc000e9b290) Go away received\nI1116 10:44:55.322524 3976 log.go:181] (0xc000e9b290) (0xc0007dc8c0) Stream removed, 
broadcasting: 1\nI1116 10:44:55.322543 3976 log.go:181] (0xc000e9b290) (0xc0007dc000) Stream removed, broadcasting: 3\nI1116 10:44:55.322551 3976 log.go:181] (0xc000e9b290) (0xc0003774a0) Stream removed, broadcasting: 5\n" Nov 16 10:44:55.327: INFO: stdout: "" Nov 16 10:44:55.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31278' Nov 16 10:44:55.540: INFO: stderr: "I1116 10:44:55.458969 3994 log.go:181] (0xc0009251e0) (0xc0007d86e0) Create stream\nI1116 10:44:55.459025 3994 log.go:181] (0xc0009251e0) (0xc0007d86e0) Stream added, broadcasting: 1\nI1116 10:44:55.464183 3994 log.go:181] (0xc0009251e0) Reply frame received for 1\nI1116 10:44:55.464223 3994 log.go:181] (0xc0009251e0) (0xc0007d8000) Create stream\nI1116 10:44:55.464235 3994 log.go:181] (0xc0009251e0) (0xc0007d8000) Stream added, broadcasting: 3\nI1116 10:44:55.465287 3994 log.go:181] (0xc0009251e0) Reply frame received for 3\nI1116 10:44:55.465348 3994 log.go:181] (0xc0009251e0) (0xc000cdc0a0) Create stream\nI1116 10:44:55.465372 3994 log.go:181] (0xc0009251e0) (0xc000cdc0a0) Stream added, broadcasting: 5\nI1116 10:44:55.466209 3994 log.go:181] (0xc0009251e0) Reply frame received for 5\nI1116 10:44:55.530991 3994 log.go:181] (0xc0009251e0) Data frame received for 5\nI1116 10:44:55.531022 3994 log.go:181] (0xc000cdc0a0) (5) Data frame handling\nI1116 10:44:55.531036 3994 log.go:181] (0xc000cdc0a0) (5) Data frame sent\nI1116 10:44:55.531042 3994 log.go:181] (0xc0009251e0) Data frame received for 5\nI1116 10:44:55.531051 3994 log.go:181] (0xc000cdc0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31278\nConnection to 172.18.0.14 31278 port [tcp/31278] succeeded!\nI1116 10:44:55.531139 3994 log.go:181] (0xc000cdc0a0) (5) Data frame sent\nI1116 10:44:55.531375 3994 log.go:181] (0xc0009251e0) Data frame received for 5\nI1116 10:44:55.531388 
3994 log.go:181] (0xc000cdc0a0) (5) Data frame handling\nI1116 10:44:55.531405 3994 log.go:181] (0xc0009251e0) Data frame received for 3\nI1116 10:44:55.531410 3994 log.go:181] (0xc0007d8000) (3) Data frame handling\nI1116 10:44:55.533331 3994 log.go:181] (0xc0009251e0) Data frame received for 1\nI1116 10:44:55.533342 3994 log.go:181] (0xc0007d86e0) (1) Data frame handling\nI1116 10:44:55.533349 3994 log.go:181] (0xc0007d86e0) (1) Data frame sent\nI1116 10:44:55.533368 3994 log.go:181] (0xc0009251e0) (0xc0007d86e0) Stream removed, broadcasting: 1\nI1116 10:44:55.533590 3994 log.go:181] (0xc0009251e0) Go away received\nI1116 10:44:55.533636 3994 log.go:181] (0xc0009251e0) (0xc0007d86e0) Stream removed, broadcasting: 1\nI1116 10:44:55.533646 3994 log.go:181] (0xc0009251e0) (0xc0007d8000) Stream removed, broadcasting: 3\nI1116 10:44:55.533652 3994 log.go:181] (0xc0009251e0) (0xc000cdc0a0) Stream removed, broadcasting: 5\n" Nov 16 10:44:55.540: INFO: stdout: "" Nov 16 10:44:55.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31278/ ; done' Nov 16 10:44:55.849: INFO: stderr: "I1116 10:44:55.676175 4011 log.go:181] (0xc0000b9340) (0xc00083a0a0) Create stream\nI1116 10:44:55.676247 4011 log.go:181] (0xc0000b9340) (0xc00083a0a0) Stream added, broadcasting: 1\nI1116 10:44:55.678249 4011 log.go:181] (0xc0000b9340) Reply frame received for 1\nI1116 10:44:55.678302 4011 log.go:181] (0xc0000b9340) (0xc0008a6000) Create stream\nI1116 10:44:55.678328 4011 log.go:181] (0xc0000b9340) (0xc0008a6000) Stream added, broadcasting: 3\nI1116 10:44:55.679118 4011 log.go:181] (0xc0000b9340) Reply frame received for 3\nI1116 10:44:55.679191 4011 log.go:181] (0xc0000b9340) (0xc0008a6140) Create stream\nI1116 10:44:55.679208 4011 log.go:181] (0xc0000b9340) (0xc0008a6140) Stream added, 
broadcasting: 5\nI1116 10:44:55.679994 4011 log.go:181] (0xc0000b9340) Reply frame received for 5\nI1116 10:44:55.741646 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.741683 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.741693 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.741715 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.741722 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.741730 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.747237 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.747274 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.747300 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.747945 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.747969 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.747985 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.747996 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.748042 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.748080 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.752414 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.752430 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.752443 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.753327 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.753352 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.753372 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.753396 4011 log.go:181] 
(0xc0000b9340) Data frame received for 3\nI1116 10:44:55.753430 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.753444 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.758540 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.758556 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.758563 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.758949 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.758964 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.758982 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.759038 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.759050 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.759065 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.765761 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.765778 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.765788 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.766542 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.766563 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.766582 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.766598 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.766631 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.766639 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.766653 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.766676 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.766732 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.771244 
4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.771277 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.771306 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.772169 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.772200 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.772236 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.772268 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.772293 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.772322 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.777363 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.777395 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.777418 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.777910 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.777944 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.777963 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.777975 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.777984 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.777994 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.778003 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.778011 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.778034 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.782772 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.782798 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.782834 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 
10:44:55.783330 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.783353 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.783361 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.783372 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.783402 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.783418 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.789300 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.789317 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.789324 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.790095 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.790113 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.790125 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.790145 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.790172 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.790201 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.795360 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.795380 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.795465 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.796320 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.796341 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.796383 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.796418 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.796435 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.15:31278/\nI1116 10:44:55.796459 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.803990 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.804014 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.804033 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.804968 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.804988 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.805007 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.805016 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.805028 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.805083 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.805104 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.805115 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.805140 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.811488 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.811523 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.811559 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.812451 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.812480 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.812516 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.812559 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\nI1116 10:44:55.812576 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.812587 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.818902 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.818952 4011 
log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.818976 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.819844 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.819893 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.819934 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.819981 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.820009 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.820042 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.827264 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.827286 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.827315 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.828001 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.828013 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.828019 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.828030 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.828040 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.828053 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.831486 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.831515 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.831533 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.831860 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.831885 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.831896 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.831919 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 
10:44:55.831939 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.831953 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:55.835587 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.835634 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.835666 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.835834 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.835854 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.835863 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI1116 10:44:55.835988 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.836008 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.836026 4011 log.go:181] (0xc0008a6140) (5) Data frame sent\n 2 http://172.18.0.15:31278/\nI1116 10:44:55.837189 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.837217 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.837225 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.839558 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.839580 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.839598 4011 log.go:181] (0xc0008a6000) (3) Data frame sent\nI1116 10:44:55.840576 4011 log.go:181] (0xc0000b9340) Data frame received for 3\nI1116 10:44:55.840623 4011 log.go:181] (0xc0008a6000) (3) Data frame handling\nI1116 10:44:55.840649 4011 log.go:181] (0xc0000b9340) Data frame received for 5\nI1116 10:44:55.840667 4011 log.go:181] (0xc0008a6140) (5) Data frame handling\nI1116 10:44:55.842485 4011 log.go:181] (0xc0000b9340) Data frame received for 1\nI1116 10:44:55.842507 4011 log.go:181] (0xc00083a0a0) (1) Data frame handling\nI1116 10:44:55.842529 4011 log.go:181] (0xc00083a0a0) (1) 
Data frame sent\nI1116 10:44:55.842559 4011 log.go:181] (0xc0000b9340) (0xc00083a0a0) Stream removed, broadcasting: 1\nI1116 10:44:55.842617 4011 log.go:181] (0xc0000b9340) Go away received\nI1116 10:44:55.843004 4011 log.go:181] (0xc0000b9340) (0xc00083a0a0) Stream removed, broadcasting: 1\nI1116 10:44:55.843022 4011 log.go:181] (0xc0000b9340) (0xc0008a6000) Stream removed, broadcasting: 3\nI1116 10:44:55.843033 4011 log.go:181] (0xc0000b9340) (0xc0008a6140) Stream removed, broadcasting: 5\n" Nov 16 10:44:55.850: INFO: stdout: "\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4\naffinity-nodeport-timeout-v7kk4" Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from 
host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Received response from host: affinity-nodeport-timeout-v7kk4 Nov 16 10:44:55.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31278/' Nov 16 10:44:56.064: INFO: stderr: "I1116 10:44:55.991417 4029 log.go:181] (0xc000536f20) (0xc00052e500) Create stream\nI1116 10:44:55.991505 4029 log.go:181] (0xc000536f20) (0xc00052e500) Stream added, broadcasting: 1\nI1116 10:44:55.996779 4029 log.go:181] (0xc000536f20) Reply frame received for 1\nI1116 10:44:55.996808 4029 log.go:181] (0xc000536f20) (0xc0004397c0) Create stream\nI1116 10:44:55.996816 4029 log.go:181] (0xc000536f20) (0xc0004397c0) Stream added, broadcasting: 3\nI1116 10:44:55.997597 4029 log.go:181] (0xc000536f20) Reply frame received for 3\nI1116 10:44:55.997621 4029 log.go:181] (0xc000536f20) (0xc00052e000) Create stream\nI1116 10:44:55.997629 4029 log.go:181] (0xc000536f20) (0xc00052e000) Stream added, broadcasting: 5\nI1116 10:44:55.998299 4029 log.go:181] (0xc000536f20) Reply frame received for 5\nI1116 10:44:56.051677 4029 log.go:181] (0xc000536f20) Data frame received for 5\nI1116 10:44:56.051700 4029 log.go:181] (0xc00052e000) (5) Data frame handling\nI1116 10:44:56.051713 4029 log.go:181] (0xc00052e000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:44:56.057072 4029 log.go:181] (0xc000536f20) Data frame received for 3\nI1116 10:44:56.057094 4029 log.go:181] (0xc0004397c0) 
(3) Data frame handling\nI1116 10:44:56.057114 4029 log.go:181] (0xc0004397c0) (3) Data frame sent\nI1116 10:44:56.057929 4029 log.go:181] (0xc000536f20) Data frame received for 5\nI1116 10:44:56.057948 4029 log.go:181] (0xc00052e000) (5) Data frame handling\nI1116 10:44:56.058093 4029 log.go:181] (0xc000536f20) Data frame received for 3\nI1116 10:44:56.058120 4029 log.go:181] (0xc0004397c0) (3) Data frame handling\nI1116 10:44:56.059763 4029 log.go:181] (0xc000536f20) Data frame received for 1\nI1116 10:44:56.059781 4029 log.go:181] (0xc00052e500) (1) Data frame handling\nI1116 10:44:56.059791 4029 log.go:181] (0xc00052e500) (1) Data frame sent\nI1116 10:44:56.059807 4029 log.go:181] (0xc000536f20) (0xc00052e500) Stream removed, broadcasting: 1\nI1116 10:44:56.060137 4029 log.go:181] (0xc000536f20) (0xc00052e500) Stream removed, broadcasting: 1\nI1116 10:44:56.060153 4029 log.go:181] (0xc000536f20) (0xc0004397c0) Stream removed, broadcasting: 3\nI1116 10:44:56.060161 4029 log.go:181] (0xc000536f20) (0xc00052e000) Stream removed, broadcasting: 5\n" Nov 16 10:44:56.064: INFO: stdout: "affinity-nodeport-timeout-v7kk4" Nov 16 10:45:11.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31278/' Nov 16 10:45:11.302: INFO: stderr: "I1116 10:45:11.196774 4047 log.go:181] (0xc00003a0b0) (0xc000986500) Create stream\nI1116 10:45:11.196963 4047 log.go:181] (0xc00003a0b0) (0xc000986500) Stream added, broadcasting: 1\nI1116 10:45:11.199287 4047 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1116 10:45:11.199315 4047 log.go:181] (0xc00003a0b0) (0xc00099a000) Create stream\nI1116 10:45:11.199323 4047 log.go:181] (0xc00003a0b0) (0xc00099a000) Stream added, broadcasting: 3\nI1116 10:45:11.200365 4047 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1116 10:45:11.200425 4047 log.go:181] 
(0xc00003a0b0) (0xc000986aa0) Create stream\nI1116 10:45:11.200443 4047 log.go:181] (0xc00003a0b0) (0xc000986aa0) Stream added, broadcasting: 5\nI1116 10:45:11.201445 4047 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1116 10:45:11.292233 4047 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1116 10:45:11.292265 4047 log.go:181] (0xc000986aa0) (5) Data frame handling\nI1116 10:45:11.292285 4047 log.go:181] (0xc000986aa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:45:11.293995 4047 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1116 10:45:11.294017 4047 log.go:181] (0xc00099a000) (3) Data frame handling\nI1116 10:45:11.294035 4047 log.go:181] (0xc00099a000) (3) Data frame sent\nI1116 10:45:11.294412 4047 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1116 10:45:11.294428 4047 log.go:181] (0xc000986aa0) (5) Data frame handling\nI1116 10:45:11.294561 4047 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1116 10:45:11.294582 4047 log.go:181] (0xc00099a000) (3) Data frame handling\nI1116 10:45:11.296222 4047 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1116 10:45:11.296238 4047 log.go:181] (0xc000986500) (1) Data frame handling\nI1116 10:45:11.296248 4047 log.go:181] (0xc000986500) (1) Data frame sent\nI1116 10:45:11.296390 4047 log.go:181] (0xc00003a0b0) (0xc000986500) Stream removed, broadcasting: 1\nI1116 10:45:11.296674 4047 log.go:181] (0xc00003a0b0) Go away received\nI1116 10:45:11.296727 4047 log.go:181] (0xc00003a0b0) (0xc000986500) Stream removed, broadcasting: 1\nI1116 10:45:11.296756 4047 log.go:181] (0xc00003a0b0) (0xc00099a000) Stream removed, broadcasting: 3\nI1116 10:45:11.296763 4047 log.go:181] (0xc00003a0b0) (0xc000986aa0) Stream removed, broadcasting: 5\n" Nov 16 10:45:11.302: INFO: stdout: "affinity-nodeport-timeout-v7kk4" Nov 16 10:45:26.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec 
--namespace=services-4312 execpod-affinity9lkcl -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31278/' Nov 16 10:45:26.561: INFO: stderr: "I1116 10:45:26.440030 4065 log.go:181] (0xc0005aed10) (0xc00052a5a0) Create stream\nI1116 10:45:26.440084 4065 log.go:181] (0xc0005aed10) (0xc00052a5a0) Stream added, broadcasting: 1\nI1116 10:45:26.442633 4065 log.go:181] (0xc0005aed10) Reply frame received for 1\nI1116 10:45:26.442674 4065 log.go:181] (0xc0005aed10) (0xc000c96640) Create stream\nI1116 10:45:26.442698 4065 log.go:181] (0xc0005aed10) (0xc000c96640) Stream added, broadcasting: 3\nI1116 10:45:26.443702 4065 log.go:181] (0xc0005aed10) Reply frame received for 3\nI1116 10:45:26.443758 4065 log.go:181] (0xc0005aed10) (0xc0001708c0) Create stream\nI1116 10:45:26.443808 4065 log.go:181] (0xc0005aed10) (0xc0001708c0) Stream added, broadcasting: 5\nI1116 10:45:26.444940 4065 log.go:181] (0xc0005aed10) Reply frame received for 5\nI1116 10:45:26.545263 4065 log.go:181] (0xc0005aed10) Data frame received for 5\nI1116 10:45:26.545310 4065 log.go:181] (0xc0001708c0) (5) Data frame handling\nI1116 10:45:26.545350 4065 log.go:181] (0xc0001708c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31278/\nI1116 10:45:26.550828 4065 log.go:181] (0xc0005aed10) Data frame received for 3\nI1116 10:45:26.550866 4065 log.go:181] (0xc000c96640) (3) Data frame handling\nI1116 10:45:26.550903 4065 log.go:181] (0xc000c96640) (3) Data frame sent\nI1116 10:45:26.551464 4065 log.go:181] (0xc0005aed10) Data frame received for 3\nI1116 10:45:26.551483 4065 log.go:181] (0xc000c96640) (3) Data frame handling\nI1116 10:45:26.551708 4065 log.go:181] (0xc0005aed10) Data frame received for 5\nI1116 10:45:26.551718 4065 log.go:181] (0xc0001708c0) (5) Data frame handling\nI1116 10:45:26.553616 4065 log.go:181] (0xc0005aed10) Data frame received for 1\nI1116 10:45:26.553649 4065 log.go:181] (0xc00052a5a0) (1) Data frame handling\nI1116 10:45:26.553677 4065 
log.go:181] (0xc00052a5a0) (1) Data frame sent\nI1116 10:45:26.554058 4065 log.go:181] (0xc0005aed10) (0xc00052a5a0) Stream removed, broadcasting: 1\nI1116 10:45:26.554299 4065 log.go:181] (0xc0005aed10) Go away received\nI1116 10:45:26.554449 4065 log.go:181] (0xc0005aed10) (0xc00052a5a0) Stream removed, broadcasting: 1\nI1116 10:45:26.554474 4065 log.go:181] (0xc0005aed10) (0xc000c96640) Stream removed, broadcasting: 3\nI1116 10:45:26.554489 4065 log.go:181] (0xc0005aed10) (0xc0001708c0) Stream removed, broadcasting: 5\n" Nov 16 10:45:26.561: INFO: stdout: "affinity-nodeport-timeout-kkhsg" Nov 16 10:45:26.561: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4312, will wait for the garbage collector to delete the pods Nov 16 10:45:26.699: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 33.829447ms Nov 16 10:45:27.199: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.202529ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:45:32.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4312" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:68.192 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":290,"skipped":4783,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:45:32.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete 
[NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:45:32.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3547" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4788,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:45:32.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-4894628f-d3e9-4d15-a831-6ab266fc5a24 STEP: Creating a pod to test consume configMaps Nov 16 10:45:32.517: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae" in namespace "projected-979" to 
be "Succeeded or Failed" Nov 16 10:45:32.543: INFO: Pod "pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae": Phase="Pending", Reason="", readiness=false. Elapsed: 26.273652ms Nov 16 10:45:34.549: INFO: Pod "pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031479423s Nov 16 10:45:37.182: INFO: Pod "pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665386992s Nov 16 10:45:39.188: INFO: Pod "pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.670911584s STEP: Saw pod success Nov 16 10:45:39.188: INFO: Pod "pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae" satisfied condition "Succeeded or Failed" Nov 16 10:45:39.191: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae container projected-configmap-volume-test: STEP: delete the pod Nov 16 10:45:39.236: INFO: Waiting for pod pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae to disappear Nov 16 10:45:39.244: INFO: Pod pod-projected-configmaps-47e0ed77-ca74-4120-92ff-8b4674836cae no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:45:39.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-979" for this suite. 
• [SLOW TEST:6.839 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4795,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:45:39.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 16 10:45:39.372: INFO: Waiting up to 5m0s for pod "pod-42a907e9-73f3-48fa-a9e3-372c097e69cb" in namespace "emptydir-7570" to be "Succeeded or Failed" Nov 16 10:45:39.382: INFO: Pod "pod-42a907e9-73f3-48fa-a9e3-372c097e69cb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.928353ms Nov 16 10:45:41.386: INFO: Pod "pod-42a907e9-73f3-48fa-a9e3-372c097e69cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013761638s Nov 16 10:45:43.390: INFO: Pod "pod-42a907e9-73f3-48fa-a9e3-372c097e69cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018267181s STEP: Saw pod success Nov 16 10:45:43.390: INFO: Pod "pod-42a907e9-73f3-48fa-a9e3-372c097e69cb" satisfied condition "Succeeded or Failed" Nov 16 10:45:43.393: INFO: Trying to get logs from node latest-worker pod pod-42a907e9-73f3-48fa-a9e3-372c097e69cb container test-container: STEP: delete the pod Nov 16 10:45:43.450: INFO: Waiting for pod pod-42a907e9-73f3-48fa-a9e3-372c097e69cb to disappear Nov 16 10:45:43.475: INFO: Pod pod-42a907e9-73f3-48fa-a9e3-372c097e69cb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:45:43.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7570" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4807,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:45:43.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Nov 16 10:45:43.613: INFO: >>> kubeConfig: /root/.kube/config Nov 16 10:45:46.576: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:45:57.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5141" for this suite. 
• [SLOW TEST:13.925 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":294,"skipped":4809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:45:57.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:45:57.480: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:46:03.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6024" for this suite. • [SLOW TEST:6.341 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":295,"skipped":4848,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:46:03.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8086.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8086.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 16 10:46:09.879: INFO: DNS probes using dns-8086/dns-test-c73c54b4-2dba-43eb-88e5-fc36f88094c8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:46:09.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8086" for this suite. • [SLOW TEST:6.253 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":296,"skipped":4864,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Nov 16 10:46:10.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 16 10:46:10.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9" in namespace "projected-4415" to be "Succeeded or Failed" Nov 16 10:46:10.653: INFO: Pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.404578ms Nov 16 10:46:12.783: INFO: Pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145082701s Nov 16 10:46:14.788: INFO: Pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149909744s Nov 16 10:46:16.986: INFO: Pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.348499203s Nov 16 10:46:18.997: INFO: Pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.359056361s STEP: Saw pod success Nov 16 10:46:18.997: INFO: Pod "downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9" satisfied condition "Succeeded or Failed" Nov 16 10:46:19.000: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9 container client-container: STEP: delete the pod Nov 16 10:46:19.043: INFO: Waiting for pod downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9 to disappear Nov 16 10:46:19.050: INFO: Pod downwardapi-volume-ad16d292-5ad5-4233-9cb2-ada71e1609b9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:46:19.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4415" for this suite. • [SLOW TEST:9.055 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":297,"skipped":4866,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 16 10:46:19.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 16 10:46:19.138: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 16 10:46:21.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6760 create -f -' Nov 16 10:46:24.689: INFO: stderr: "" Nov 16 10:46:24.689: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Nov 16 10:46:24.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6760 delete e2e-test-crd-publish-openapi-4522-crds test-cr' Nov 16 10:46:24.803: INFO: stderr: "" Nov 16 10:46:24.803: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Nov 16 10:46:24.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6760 apply -f -' Nov 16 10:46:25.108: INFO: stderr: "" Nov 16 10:46:25.108: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Nov 16 10:46:25.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6760 delete e2e-test-crd-publish-openapi-4522-crds test-cr' Nov 16 10:46:25.382: INFO: stderr: "" Nov 16 10:46:25.382: INFO: stdout: "e2e-test-crd-publish-openapi-4522-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Nov 16 10:46:25.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4522-crds' Nov 16 10:46:26.974: INFO: stderr: "" Nov 16 10:46:26.974: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4522-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 16 10:46:30.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6760" for this suite. 
• [SLOW TEST:11.376 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":298,"skipped":4868,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:46:30.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Nov 16 10:46:30.735: INFO: Waiting up to 5m0s for pod "downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86" in namespace "downward-api-9100" to be "Succeeded or Failed"
Nov 16 10:46:30.762: INFO: Pod "downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86": Phase="Pending", Reason="", readiness=false. Elapsed: 26.133435ms
Nov 16 10:46:32.765: INFO: Pod "downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030006009s
Nov 16 10:46:34.776: INFO: Pod "downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040869496s
Nov 16 10:46:36.779: INFO: Pod "downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043979308s
STEP: Saw pod success
Nov 16 10:46:36.779: INFO: Pod "downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86" satisfied condition "Succeeded or Failed"
Nov 16 10:46:36.782: INFO: Trying to get logs from node latest-worker pod downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86 container dapi-container:
STEP: delete the pod
Nov 16 10:46:36.813: INFO: Waiting for pod downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86 to disappear
Nov 16 10:46:36.821: INFO: Pod downward-api-567670d0-489c-4cc4-86ec-af1b1fb05c86 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:46:36.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9100" for this suite.
• [SLOW TEST:6.393 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4900,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:46:36.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Nov 16 10:46:45.432: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:45.456: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 16 10:46:47.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:47.462: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 16 10:46:49.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:49.461: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 16 10:46:51.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:51.462: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 16 10:46:53.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:53.461: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 16 10:46:55.456: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:55.461: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 16 10:46:57.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 16 10:46:57.461: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:46:57.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7526" for this suite.
• [SLOW TEST:20.650 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4901,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:46:57.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Nov 16 10:46:57.544: INFO: Waiting up to 5m0s for pod "pod-f87a9442-a7ad-436f-9f8a-6be204085ad8" in namespace "emptydir-7409" to be "Succeeded or Failed"
Nov 16 10:46:57.584: INFO: Pod "pod-f87a9442-a7ad-436f-9f8a-6be204085ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.113138ms
Nov 16 10:46:59.770: INFO: Pod "pod-f87a9442-a7ad-436f-9f8a-6be204085ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226070932s
Nov 16 10:47:01.773: INFO: Pod "pod-f87a9442-a7ad-436f-9f8a-6be204085ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229402618s
Nov 16 10:47:03.777: INFO: Pod "pod-f87a9442-a7ad-436f-9f8a-6be204085ad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.233521456s
STEP: Saw pod success
Nov 16 10:47:03.777: INFO: Pod "pod-f87a9442-a7ad-436f-9f8a-6be204085ad8" satisfied condition "Succeeded or Failed"
Nov 16 10:47:03.780: INFO: Trying to get logs from node latest-worker pod pod-f87a9442-a7ad-436f-9f8a-6be204085ad8 container test-container:
STEP: delete the pod
Nov 16 10:47:03.816: INFO: Waiting for pod pod-f87a9442-a7ad-436f-9f8a-6be204085ad8 to disappear
Nov 16 10:47:03.834: INFO: Pod pod-f87a9442-a7ad-436f-9f8a-6be204085ad8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:47:03.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7409" for this suite.
• [SLOW TEST:6.384 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":301,"skipped":4925,"failed":0}
SS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:47:03.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Nov 16 10:47:03.943: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Nov 16 10:47:03.991: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Nov 16 10:47:03.991: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Nov 16 10:47:04.004: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Nov 16 10:47:04.004: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Nov 16 10:47:04.171: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Nov 16 10:47:04.171: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Nov 16 10:47:12.591: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:47:12.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9414" for this suite.
• [SLOW TEST:8.791 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":302,"skipped":4927,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 16 10:47:12.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 16 10:47:13.438: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 16 10:47:15.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 16 10:47:17.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741120433, loc:(*time.Location)(0x77108c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 16 10:47:20.735: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Nov 16 10:47:26.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config attach --namespace=webhook-7998 to-be-attached-pod -i -c=container1'
Nov 16 10:47:26.389: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 16 10:47:26.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7998" for this suite.
STEP: Destroying namespace "webhook-7998-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.929 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":303,"skipped":4928,"failed":0}
SSS
Nov 16 10:47:26.581: INFO: Running AfterSuite actions on all nodes
Nov 16 10:47:26.581: INFO: Running AfterSuite actions on node 1
Nov 16 10:47:26.581: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":303,"completed":303,"skipped":4931,"failed":0}
Ran 303 of 5234 Specs in 6275.936 seconds
SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4931 Skipped
PASS