Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634949708 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 23 00:41:50.278: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.281: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 00:41:50.310: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 00:41:50.382: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 00:41:50.382: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 00:41:50.382: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 00:41:50.382: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 00:41:50.382: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 00:41:50.400: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 00:41:50.400: INFO: e2e test version: v1.21.5
Oct 23 00:41:50.401: INFO: kube-apiserver version: v1.21.1
Oct 23 00:41:50.402: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.408: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Oct 23 00:41:50.403: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.423: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 23 00:41:50.425: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.447: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
Oct 23 00:41:50.442: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.463: INFO: Cluster IP family: ipv4
Oct 23 00:41:50.442: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.465: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Oct 23 00:41:50.447: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.469: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 23 00:41:50.448: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.471: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 23 00:41:50.451: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.472: INFO: Cluster IP family: ipv4
Oct 23 00:41:50.450: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.473: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 23 00:41:50.451: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 00:41:50.474: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W1023 00:41:50.489933 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.490: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.491: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: validating cluster-info
Oct 23 00:41:50.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3291 cluster-info'
Oct 23 00:41:50.750: INFO: stderr: ""
Oct 23 00:41:50.750: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:41:50.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3291" for this suite.
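
The cluster-info check above shells out to the kubectl binary and asserts on its stdout. A minimal Go sketch of that validation, assuming only the kubectl and kubeconfig paths shown in the log (the real test drives this through the e2e framework):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same command the suite logs and capture stdout+stderr.
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		panic(err)
	}
	// The banner is wrapped in ANSI color codes, but the phrase itself
	// still matches a plain substring check.
	if !strings.Contains(string(out), "Kubernetes control plane") {
		panic(fmt.Sprintf("unexpected cluster-info output: %q", out))
	}
	fmt.Println("cluster-info OK: control plane banner found")
}
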
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
W1023 00:41:50.486664 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.486: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.490: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 23 00:41:50.508: INFO: The status of Pod busybox-scheduling-5b3c5768-203e-4e37-a134-311665b00673 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:52.512: INFO: The status of Pod busybox-scheduling-5b3c5768-203e-4e37-a134-311665b00673 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:54.513: INFO: The status of Pod busybox-scheduling-5b3c5768-203e-4e37-a134-311665b00673 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:56.512: INFO: The status of Pod busybox-scheduling-5b3c5768-203e-4e37-a134-311665b00673 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:58.515: INFO: The status of Pod busybox-scheduling-5b3c5768-203e-4e37-a134-311665b00673 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:00.511: INFO: The status of Pod busybox-scheduling-5b3c5768-203e-4e37-a134-311665b00673 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:00.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2746" for this suite.
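
The Kubelet test above waits for a busybox pod to run and then verifies its echoed output in the container logs. A rough client-go sketch of the same create-then-read-logs flow; the pod and namespace names here are illustrative, and the wait loop between the two calls is elided:

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "kubelet-test", "busybox-logging" // illustrative names
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo hello from busybox"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The real test polls until the pod is Running/complete before this point.
	stream, err := client.CoreV1().Pods(ns).GetLogs(name, &corev1.PodLogOptions{}).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	logs, _ := io.ReadAll(stream)
	fmt.Printf("pod logs: %s", logs)
}
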
• [SLOW TEST:10.071 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
W1023 00:41:50.525027 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.525: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.526: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Oct 23 00:41:56.552: INFO: &Pod{ObjectMeta:{send-events-e10baa47-d131-40c8-b509-64638ae5a0f6 events-4457 fe3b17b7-6d70-4bd1-a077-e20f0b8ddca8 56274 0 2021-10-23 00:41:50 +0000 UTC map[name:foo time:529182638] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.234" ], "mac": "5a:13:b9:ce:b8:b9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.234" ], "mac": "5a:13:b9:ce:b8:b9", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-23 00:41:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:41:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:41:55 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.234\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-62q54,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-62q54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:41:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:41:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:41:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:41:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.234,StartTime:2021-10-23 00:41:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:41:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://debbf811c04f7f4d188aa6dcfd5585671c04286b091d09946f0a870a20912897,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Oct 23 00:41:58.556: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Oct 23 00:42:00.561: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:00.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4457" for this suite.
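
The event checks above poll for a scheduler event and a kubelet event that reference the test pod. A sketch of such a query with client-go, using the namespace and pod name from the log; the exact field selectors the suite builds may differ:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and pod name as they appear in the log above.
	ns, pod := "events-4457", "send-events-e10baa47-d131-40c8-b509-64638ae5a0f6"

	// Scheduler events carry source=default-scheduler, kubelet events
	// source=kubelet; both are standard event field selectors.
	for _, source := range []string{"default-scheduler", "kubelet"} {
		events, err := client.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "involvedObject.name=" + pod + ",source=" + source,
		})
		if err != nil {
			panic(err)
		}
		fmt.Printf("saw %d event(s) from %s for pod %s\n", len(events.Items), source, pod)
	}
}
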
• [SLOW TEST:10.081 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W1023 00:41:50.493286 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.493: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.495: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-c74a79b1-79f9-4901-a2e3-7550e08c078a
STEP: Creating a pod to test consume configMaps
Oct 23 00:41:50.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457" in namespace "configmap-4800" to be "Succeeded or Failed"
Oct 23 00:41:50.518: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398961ms
Oct 23 00:41:52.521: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007853583s
Oct 23 00:41:54.524: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010322363s
Oct 23 00:41:56.528: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014070212s
Oct 23 00:41:58.532: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018006079s
Oct 23 00:42:00.534: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020846249s
Oct 23 00:42:02.541: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.027224468s
STEP: Saw pod success
Oct 23 00:42:02.541: INFO: Pod "pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457" satisfied condition "Succeeded or Failed"
Oct 23 00:42:02.544: INFO: Trying to get logs from node node2 pod pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457 container agnhost-container: 
STEP: delete the pod
Oct 23 00:42:02.558: INFO: Waiting for pod pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457 to disappear
Oct 23 00:42:02.560: INFO: Pod pod-configmaps-931859c8-89f8-49b4-98dc-930314d50457 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:02.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4800" for this suite.
• [SLOW TEST:12.099 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-b046d85a-414e-4de7-bfb0-b63c6b101628
STEP: Creating a pod to test consume configMaps
Oct 23 00:41:50.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915" in namespace "projected-4697" to be "Succeeded or Failed"
Oct 23 00:41:50.809: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550092ms
Oct 23 00:41:52.812: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005644186s
Oct 23 00:41:54.816: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010140415s
Oct 23 00:41:56.820: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013943841s
Oct 23 00:41:58.824: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017941008s
Oct 23 00:42:00.827: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020869534s
Oct 23 00:42:02.832: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025972153s
STEP: Saw pod success
Oct 23 00:42:02.832: INFO: Pod "pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915" satisfied condition "Succeeded or Failed"
Oct 23 00:42:02.835: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915 container agnhost-container: 
STEP: delete the pod
Oct 23 00:42:02.848: INFO: Waiting for pod pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915 to disappear
Oct 23 00:42:02.850: INFO: Pod pod-projected-configmaps-f249b322-3c57-4049-90ed-c471c7c56915 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:02.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4697" for this suite.
• [SLOW TEST:12.085 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W1023 00:41:50.534425 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.534: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.536: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Oct 23 00:41:50.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5941 create -f -'
Oct 23 00:41:50.894: INFO: stderr: ""
Oct 23 00:41:50.894: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Oct 23 00:41:51.898: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:51.898: INFO: Found 0 / 1
Oct 23 00:41:52.897: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:52.897: INFO: Found 0 / 1
Oct 23 00:41:53.897: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:53.897: INFO: Found 0 / 1
Oct 23 00:41:54.897: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:54.897: INFO: Found 0 / 1
Oct 23 00:41:55.899: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:55.899: INFO: Found 0 / 1
Oct 23 00:41:56.900: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:56.900: INFO: Found 0 / 1
Oct 23 00:41:57.899: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:57.899: INFO: Found 0 / 1
Oct 23 00:41:58.899: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:58.899: INFO: Found 0 / 1
Oct 23 00:41:59.900: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:41:59.900: INFO: Found 0 / 1
Oct 23 00:42:00.897: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:42:00.897: INFO: Found 0 / 1
Oct 23 00:42:01.898: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:42:01.898: INFO: Found 0 / 1
Oct 23 00:42:02.897: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:42:02.897: INFO: Found 1 / 1
Oct 23 00:42:02.897: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Oct 23 00:42:02.899: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:42:02.899: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Oct 23 00:42:02.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5941 patch pod agnhost-primary-t8hnn -p {"metadata":{"annotations":{"x":"y"}}}'
Oct 23 00:42:03.067: INFO: stderr: ""
Oct 23 00:42:03.067: INFO: stdout: "pod/agnhost-primary-t8hnn patched\n"
STEP: checking annotations
Oct 23 00:42:03.070: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 23 00:42:03.070: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:03.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5941" for this suite.
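
The patch step above drives kubectl, but the same strategic-merge patch can be sent directly with client-go. Namespace, pod name, and patch body are exactly those in the log; error handling is reduced to panics for brevity:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same patch body the kubectl invocation above sends.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := client.CoreV1().Pods("kubectl-5941").Patch(context.TODO(),
		"agnhost-primary-t8hnn", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotation x =", pod.Annotations["x"])
}
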
• [SLOW TEST:12.578 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1023 00:41:50.542192 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.542: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.544: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-6178
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6178 to expose endpoints map[]
Oct 23 00:41:50.562: INFO: successfully validated that service multi-endpoint-test in namespace services-6178 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-6178
Oct 23 00:41:50.576: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:52.580: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:54.579: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:56.580: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6178 to expose endpoints map[pod1:[100]]
Oct 23 00:41:56.590: INFO: successfully validated that service multi-endpoint-test in namespace services-6178 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-6178
Oct 23 00:41:56.603: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:58.608: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:00.606: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:02.607: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:04.609: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:06.608: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6178 to expose endpoints map[pod1:[100] pod2:[101]]
Oct 23 00:42:06.624: INFO: successfully validated that service multi-endpoint-test in namespace services-6178 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-6178
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6178 to expose endpoints map[pod2:[101]]
Oct 23 00:42:06.636: INFO: successfully validated that service multi-endpoint-test in namespace services-6178 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-6178
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6178 to expose endpoints map[]
Oct 23 00:42:06.646: INFO: successfully validated that service multi-endpoint-test in namespace services-6178 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:06.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6178" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:16.171 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W1023 00:41:50.585562 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.585: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.587: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name cm-test-opt-del-0c0d29e8-62e0-428d-b6a8-0cf0d605c0b6
STEP: Creating configMap with name cm-test-opt-upd-c8dff7e6-28c9-4d2b-bc9a-b0d179f866b0
STEP: Creating the pod
Oct 23 00:41:50.615: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:52.618: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:54.621: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:56.622: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:58.622: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:00.619: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:02.619: INFO: The status of Pod pod-configmaps-63908d56-3455-4486-89f1-4a5495a4df58 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-0c0d29e8-62e0-428d-b6a8-0cf0d605c0b6
STEP: Updating configmap cm-test-opt-upd-c8dff7e6-28c9-4d2b-bc9a-b0d179f866b0
STEP: Creating configMap with name cm-test-opt-create-c8503bbe-d2f9-43be-a1b3-3a302a66dc4f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:06.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-223" for this suite.
• [SLOW TEST:16.120 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":37,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:41:50.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1023 00:41:50.519783 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 00:41:50.520: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 00:41:50.521: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Oct 23 00:41:50.542: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:52.545: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:54.547: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:56.547: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:41:58.548: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:00.546: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:02.547: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:42:04.547: INFO: The status of Pod labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97 is Running (Ready = true)
Oct 23 00:42:05.065: INFO: Successfully updated pod "labelsupdate49c8a1a6-90e1-4439-bc5d-b2fae537ed97"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:07.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5961" for this suite.
• [SLOW TEST:16.670 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:42:06.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-37ffb2e9-a0d9-4530-bf41-b22b3758db70
STEP: Creating a pod to test consume configMaps
Oct 23 00:42:06.737: INFO: Waiting up to 5m0s for pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8" in namespace "configmap-7135" to be "Succeeded or Failed"
Oct 23 00:42:06.741: INFO: Pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227473ms
Oct 23 00:42:08.744: INFO: Pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006467722s
Oct 23 00:42:10.747: INFO: Pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009719314s
Oct 23 00:42:12.751: INFO: Pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01325541s
Oct 23 00:42:14.753: INFO: Pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015927887s
STEP: Saw pod success
Oct 23 00:42:14.753: INFO: Pod "pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8" satisfied condition "Succeeded or Failed"
Oct 23 00:42:14.755: INFO: Trying to get logs from node node2 pod pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8 container agnhost-container: 
STEP: delete the pod
Oct 23 00:42:14.813: INFO: Waiting for pod pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8 to disappear
Oct 23 00:42:14.815: INFO: Pod pod-configmaps-396691eb-e232-45e3-8534-c5a10c5489b8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:14.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7135" for this suite.
• [SLOW TEST:8.123 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":44,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:42:07.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:42:15.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-616" for this suite.
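
The sysctl test above creates a pod whose security context requests kernel.shm_rmid_forced and then reads the value back from the pod logs. A sketch of such a pod spec; names, image, and command are illustrative rather than the suite's exact spec:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod builds a pod that sets the sysctl via the pod-level security
// context and prints the resulting value so a caller can check the logs.
func sysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "sysctl-check",
				Image:   "busybox:1.29",
				Command: []string{"sysctl", "kernel.shm_rmid_forced"},
			}},
		},
	}
}

func main() {
	pod := sysctlPod()
	fmt.Println("would create pod", pod.Name, "with sysctls", pod.Spec.SecurityContext.Sysctls)
}
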
• [SLOW TEST:8.111 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
SSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:42:03.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 23 00:42:03.108: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Oct 23 00:42:08.114: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 23 00:42:18.121: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
Oct 23 00:42:18.135: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7472 96e847e1-ff84-46db-aa00-ec8109a5a288 56900 1 2021-10-23 00:42:18 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-10-23 00:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004420048
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Oct 23 00:42:18.138: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Oct 23 00:42:18.138: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Oct 23 00:42:18.138: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7472 4da3143e-40d9-4adc-afba-8c2602322790 56901 1 2021-10-23 00:42:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 96e847e1-ff84-46db-aa00-ec8109a5a288 0xc004420367 0xc004420368}] [] [{e2e.test Update apps/v1 2021-10-23 00:42:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"96e847e1-ff84-46db-aa00-ec8109a5a288\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004420408 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:42:18.141: INFO: Pod "test-cleanup-controller-2bzfs" is available: &Pod{ObjectMeta:{test-cleanup-controller-2bzfs test-cleanup-controller- deployment-7472 9a2de85e-27a1-4741-bca1-a5e5d7f2bcc6 56857 0 2021-10-23 00:42:03 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.244" ], "mac": "02:11:07:c5:cc:c4", "default": true, "dns": {} }] 
k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.244" ], "mac": "02:11:07:c5:cc:c4", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller 4da3143e-40d9-4adc-afba-8c2602322790 0xc0044206f7 0xc0044206f8}] [] [{kube-controller-manager Update v1 2021-10-23 00:42:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4da3143e-40d9-4adc-afba-8c2602322790\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:42:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:42:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.244\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-57fxp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-57fxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,
SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:42:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:42:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:42:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:42:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.244,StartTime:2021-10-23 00:42:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:42:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://6e58815facaa1a4851f0d2d27f2bd44a3f191f44a05236eceec0b5d214c20dd0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:18.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7472" for this suite. 
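
The cleanup behavior verified above hinges on spec.revisionHistoryLimit; the dump shows RevisionHistoryLimit:*0 on test-cleanup-deployment, which tells the controller to delete old ReplicaSets as soon as they are scaled down. A sketch of a Deployment built the same way (image and labels mirror the log; the rest is illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RevisionHistoryLimit of 0 keeps no old ReplicaSets around, which is
	// exactly what the "should delete old replica sets" test waits for.
	replicas, historyLimit := int32(1), int32(0)
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}
	fmt.Println("deployment", dep.Name, "keeps", *dep.Spec.RevisionHistoryLimit, "old replica sets")
}
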
• [SLOW TEST:15.068 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:41:50.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1023 00:41:50.561790 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 00:41:50.562: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 00:41:50.563: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:41:50.581: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:41:52.584: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:41:54.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:41:56.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:41:58.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:00.584: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:02.584: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:04.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:06.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:08.584: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:10.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:12.586: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:14.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:16.585: INFO: The status of Pod 
test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:18.586: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:20.584: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:22.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = false) Oct 23 00:42:24.585: INFO: The status of Pod test-webserver-1831ab8d-6431-4441-bca8-8d4251f9e8d0 is Running (Ready = true) Oct 23 00:42:24.587: INFO: Container started at 2021-10-23 00:41:57 +0000 UTC, pod became ready at 2021-10-23 00:42:20 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:24.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2010" for this suite. • [SLOW TEST:34.064 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:18.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:42:18.508: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:42:20.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} 
Oct 23 00:42:22.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:42:24.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546538, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:42:27.533: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Oct 23 00:42:27.547: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:27.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8020" for this suite. STEP: Destroying namespace "webhook-8020-markers" for this suite. 
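------------------------------
The fixture registered above denies CRD creation by intercepting CREATE operations on customresourcedefinitions and routing them to the sample webhook service. A minimal sketch of such a registration using admissionregistration.k8s.io/v1; the configuration name, service reference, and path are illustrative assumptions rather than the exact objects the suite creates:

```go
package example

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyCRDWebhook builds a ValidatingWebhookConfiguration that sends every
// CRD creation to a webhook service for admission review. caBundle must
// hold the PEM CA that signed the webhook server's certificate.
func denyCRDWebhook(caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/crd" // illustrative handler path

	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-8020",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}
```
------------------------------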
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.430 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:27.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Oct 23 00:42:27.665: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Oct 23 00:42:27.679: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:27.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2518" for this suite. 
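------------------------------
The RuntimeClass test above is plain CRUD against the cluster-scoped node.k8s.io/v1 API. A minimal client-go sketch of the same operations; the class name "example-rc" and handler "runc" are illustrative assumptions (the handler must match one configured in the node's CRI runtime):

```go
package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	rcs := cs.NodeV1().RuntimeClasses()

	// Create a RuntimeClass pointing at an existing runtime handler.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-rc"},
		Handler:    "runc",
	}
	if _, err := rcs.Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// List, then clean up, mirroring the get/list/delete steps in the test.
	list, err := rcs.List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d RuntimeClasses\n", len(list.Items))

	_ = rcs.Delete(context.TODO(), "example-rc", metav1.DeleteOptions{})
}
```
------------------------------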
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":4,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:15.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 00:42:15.357: INFO: The status of Pod annotationupdate2c15d037-42fa-4b7a-ae81-8d1c37f62e2b is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:17.361: INFO: The status of Pod annotationupdate2c15d037-42fa-4b7a-ae81-8d1c37f62e2b is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:19.361: INFO: The status of Pod annotationupdate2c15d037-42fa-4b7a-ae81-8d1c37f62e2b is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:21.363: INFO: The status of Pod annotationupdate2c15d037-42fa-4b7a-ae81-8d1c37f62e2b is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:23.362: INFO: The status of Pod annotationupdate2c15d037-42fa-4b7a-ae81-8d1c37f62e2b is Running (Ready = true) Oct 23 00:42:23.879: INFO: Successfully updated pod "annotationupdate2c15d037-42fa-4b7a-ae81-8d1c37f62e2b" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:27.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6748" for this suite. 
• [SLOW TEST:12.589 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:00.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 23 00:42:00.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 create -f -' Oct 23 00:42:01.042: INFO: stderr: "" Oct 23 00:42:01.042: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 23 00:42:01.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:01.183: INFO: stderr: "" Oct 23 00:42:01.183: INFO: stdout: "update-demo-nautilus-lp4qk update-demo-nautilus-vt9g2 " Oct 23 00:42:01.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-lp4qk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:01.334: INFO: stderr: "" Oct 23 00:42:01.335: INFO: stdout: "" Oct 23 00:42:01.335: INFO: update-demo-nautilus-lp4qk is created but not running Oct 23 00:42:06.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:06.516: INFO: stderr: "" Oct 23 00:42:06.517: INFO: stdout: "update-demo-nautilus-lp4qk update-demo-nautilus-vt9g2 " Oct 23 00:42:06.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-lp4qk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:06.684: INFO: stderr: "" Oct 23 00:42:06.684: INFO: stdout: "" Oct 23 00:42:06.684: INFO: update-demo-nautilus-lp4qk is created but not running Oct 23 00:42:11.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:11.871: INFO: stderr: "" Oct 23 00:42:11.871: INFO: stdout: "update-demo-nautilus-lp4qk update-demo-nautilus-vt9g2 " Oct 23 00:42:11.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-lp4qk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:12.030: INFO: stderr: "" Oct 23 00:42:12.030: INFO: stdout: "" Oct 23 00:42:12.030: INFO: update-demo-nautilus-lp4qk is created but not running Oct 23 00:42:17.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:17.201: INFO: stderr: "" Oct 23 00:42:17.201: INFO: stdout: "update-demo-nautilus-lp4qk update-demo-nautilus-vt9g2 " Oct 23 00:42:17.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-lp4qk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:17.357: INFO: stderr: "" Oct 23 00:42:17.358: INFO: stdout: "true" Oct 23 00:42:17.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-lp4qk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:42:17.523: INFO: stderr: "" Oct 23 00:42:17.523: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:42:17.523: INFO: validating pod update-demo-nautilus-lp4qk Oct 23 00:42:17.526: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:42:17.526: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 00:42:17.526: INFO: update-demo-nautilus-lp4qk is verified up and running Oct 23 00:42:17.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-vt9g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:17.695: INFO: stderr: "" Oct 23 00:42:17.695: INFO: stdout: "true" Oct 23 00:42:17.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-vt9g2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:42:17.859: INFO: stderr: "" Oct 23 00:42:17.859: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:42:17.859: INFO: validating pod update-demo-nautilus-vt9g2 Oct 23 00:42:17.862: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:42:17.863: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 00:42:17.863: INFO: update-demo-nautilus-vt9g2 is verified up and running STEP: scaling down the replication controller Oct 23 00:42:17.871: INFO: scanned /root for discovery docs: Oct 23 00:42:17.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Oct 23 00:42:18.093: INFO: stderr: "" Oct 23 00:42:18.093: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 23 00:42:18.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:18.291: INFO: stderr: "" Oct 23 00:42:18.291: INFO: stdout: "update-demo-nautilus-lp4qk update-demo-nautilus-vt9g2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 23 00:42:23.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:23.467: INFO: stderr: "" Oct 23 00:42:23.467: INFO: stdout: "update-demo-nautilus-vt9g2 " Oct 23 00:42:23.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-vt9g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:23.630: INFO: stderr: "" Oct 23 00:42:23.630: INFO: stdout: "true" Oct 23 00:42:23.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-vt9g2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:42:23.791: INFO: stderr: "" Oct 23 00:42:23.791: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:42:23.791: INFO: validating pod update-demo-nautilus-vt9g2 Oct 23 00:42:23.794: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:42:23.794: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 00:42:23.794: INFO: update-demo-nautilus-vt9g2 is verified up and running STEP: scaling up the replication controller Oct 23 00:42:23.802: INFO: scanned /root for discovery docs: Oct 23 00:42:23.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Oct 23 00:42:24.020: INFO: stderr: "" Oct 23 00:42:24.020: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 23 00:42:24.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:24.204: INFO: stderr: "" Oct 23 00:42:24.204: INFO: stdout: "update-demo-nautilus-9mrjr update-demo-nautilus-vt9g2 " Oct 23 00:42:24.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-9mrjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:24.375: INFO: stderr: "" Oct 23 00:42:24.375: INFO: stdout: "" Oct 23 00:42:24.375: INFO: update-demo-nautilus-9mrjr is created but not running Oct 23 00:42:29.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:42:29.561: INFO: stderr: "" Oct 23 00:42:29.561: INFO: stdout: "update-demo-nautilus-9mrjr update-demo-nautilus-vt9g2 " Oct 23 00:42:29.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-9mrjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:29.719: INFO: stderr: "" Oct 23 00:42:29.719: INFO: stdout: "true" Oct 23 00:42:29.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-9mrjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:42:29.890: INFO: stderr: "" Oct 23 00:42:29.890: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:42:29.890: INFO: validating pod update-demo-nautilus-9mrjr Oct 23 00:42:29.894: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:42:29.894: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 00:42:29.894: INFO: update-demo-nautilus-9mrjr is verified up and running Oct 23 00:42:29.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-vt9g2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:42:30.059: INFO: stderr: "" Oct 23 00:42:30.059: INFO: stdout: "true" Oct 23 00:42:30.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods update-demo-nautilus-vt9g2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:42:30.212: INFO: stderr: "" Oct 23 00:42:30.212: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:42:30.212: INFO: validating pod update-demo-nautilus-vt9g2 Oct 23 00:42:30.215: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:42:30.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 23 00:42:30.215: INFO: update-demo-nautilus-vt9g2 is verified up and running STEP: using delete to clean up resources Oct 23 00:42:30.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 delete --grace-period=0 --force -f -' Oct 23 00:42:30.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:42:30.356: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 23 00:42:30.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get rc,svc -l name=update-demo --no-headers' Oct 23 00:42:30.559: INFO: stderr: "No resources found in kubectl-6559 namespace.\n" Oct 23 00:42:30.559: INFO: stdout: "" Oct 23 00:42:30.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6559 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 23 00:42:30.721: INFO: stderr: "" Oct 23 00:42:30.721: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:30.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6559" for this suite. • [SLOW TEST:30.062 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":2,"skipped":80,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:00.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-3632 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-3632 Oct 23 00:42:00.857: INFO: Found 0 stateful pods, waiting for 1 Oct 23 00:42:10.860: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Oct 23 00:42:20.861: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset 
Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 00:42:20.883: INFO: Deleting all statefulset in ns statefulset-3632 Oct 23 00:42:20.886: INFO: Scaling statefulset ss to 0 Oct 23 00:42:30.900: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:42:30.902: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:30.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3632" for this suite. • [SLOW TEST:30.093 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":2,"skipped":125,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:27.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 00:42:31.843: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6109" for this suite. 
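------------------------------
The termination-message check above relies on the kubelet reading a file the container writes before exiting and surfacing it in the container status. A minimal sketch of the container shape being tested: a non-root user writing "DONE" to a non-default TerminationMessagePath (the image and UID are illustrative assumptions):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// terminationMessageContainer exits immediately after writing its
// termination message to a custom path instead of the default
// /dev/termination-log.
func terminationMessageContainer() corev1.Container {
	nonRoot := int64(1000) // any non-zero UID permitted by the image
	return corev1.Container{
		Name:    "termination-message-container",
		Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
		Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// The kubelet reads this file when the container terminates and
		// reports its contents in the terminated state's Message field.
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext: &corev1.SecurityContext{
			RunAsUser: &nonRoot,
		},
	}
}
```
------------------------------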
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":57,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:30.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:42:30.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f" in namespace "projected-3359" to be "Succeeded or Failed" Oct 23 00:42:30.773: INFO: Pod "downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.998406ms Oct 23 00:42:32.777: INFO: Pod "downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006992761s Oct 23 00:42:34.785: INFO: Pod "downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015329343s STEP: Saw pod success Oct 23 00:42:34.785: INFO: Pod "downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f" satisfied condition "Succeeded or Failed" Oct 23 00:42:34.788: INFO: Trying to get logs from node node1 pod downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f container client-container: STEP: delete the pod Oct 23 00:42:34.801: INFO: Waiting for pod downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f to disappear Oct 23 00:42:34.803: INFO: Pod downwardapi-volume-28b7b411-b61c-4c15-8652-f0b03ac7e67f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:34.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3359" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":82,"failed":0} [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:34.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:34.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3523" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":4,"skipped":82,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:02.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 23 00:42:02.916: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:04.920: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:06.921: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:08.920: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:10.919: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:12.921: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:14.919: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:16.920: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:18.919: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 00:42:18.933: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:20.936: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:22.938: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:24.937: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:26.936: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 23 00:42:26.974: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 00:42:26.977: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 00:42:28.978: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 00:42:28.981: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 00:42:30.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 00:42:30.980: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 00:42:32.979: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 00:42:32.982: INFO: Pod pod-with-poststart-http-hook still exists Oct 23 00:42:34.978: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 23 00:42:34.980: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:34.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3074" for this suite. 
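------------------------------
The lifecycle-hook test above pairs two pods: a handler pod serving HTTP, and a pod whose postStart hook issues an HTTP GET against it; the hook must complete before the container is considered started. A minimal sketch of the hook wiring, built against k8s.io/api v0.21.x to match this run (the hook type there is corev1.Handler, renamed LifecycleHandler in later releases); the target path, port, and image are illustrative assumptions:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPostStartHTTPHook returns a container whose postStart hook calls
// the handler pod over HTTP right after the container starts.
func podWithPostStartHTTPHook(handlerIP string) corev1.Container {
	return corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "k8s.gcr.io/pause:3.4.1", // illustrative image
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=poststart",
					Host: handlerIP, // pod IP of pod-handle-http-request
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
}
```
------------------------------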
• [SLOW TEST:32.107 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:34.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:42:34.911: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Oct 23 00:42:42.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 create -f -' Oct 23 00:42:43.357: INFO: stderr: "" Oct 23 00:42:43.357: INFO: stdout: "e2e-test-crd-publish-openapi-5922-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 23 00:42:43.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 delete e2e-test-crd-publish-openapi-5922-crds test-foo' Oct 23 00:42:43.507: INFO: stderr: "" Oct 23 00:42:43.507: INFO: stdout: "e2e-test-crd-publish-openapi-5922-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Oct 23 00:42:43.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 apply -f -' Oct 23 00:42:43.854: INFO: stderr: "" Oct 23 00:42:43.854: INFO: stdout: "e2e-test-crd-publish-openapi-5922-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 23 00:42:43.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 delete e2e-test-crd-publish-openapi-5922-crds test-foo' Oct 23 00:42:44.008: INFO: stderr: "" Oct 23 00:42:44.009: INFO: stdout: "e2e-test-crd-publish-openapi-5922-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Oct 23 00:42:44.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 create -f -' Oct 23 00:42:44.302: INFO: rc: 1 Oct 23 00:42:44.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 
--namespace=crd-publish-openapi-9398 apply -f -' Oct 23 00:42:44.576: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 23 00:42:44.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 create -f -' Oct 23 00:42:44.878: INFO: rc: 1 Oct 23 00:42:44.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 --namespace=crd-publish-openapi-9398 apply -f -' Oct 23 00:42:45.192: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 23 00:42:45.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 explain e2e-test-crd-publish-openapi-5922-crds' Oct 23 00:42:45.512: INFO: stderr: "" Oct 23 00:42:45.512: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5922-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 23 00:42:45.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 explain e2e-test-crd-publish-openapi-5922-crds.metadata' Oct 23 00:42:45.811: INFO: stderr: "" Oct 23 00:42:45.811: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5922-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix.
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object.
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 23 00:42:45.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 explain e2e-test-crd-publish-openapi-5922-crds.spec' Oct 23 00:42:46.140: INFO: stderr: "" Oct 23 00:42:46.140: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5922-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 23 00:42:46.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 explain e2e-test-crd-publish-openapi-5922-crds.spec.bars' Oct 23 00:42:46.462: INFO: stderr: "" Oct 23 00:42:46.462: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5922-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 23 00:42:46.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9398 explain e2e-test-crd-publish-openapi-5922-crds.spec.bars2' Oct 23 00:42:46.773: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:49.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9398" for this suite.
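------------------------------
The schema that kubectl explain prints above comes from a structural OpenAPI v3 schema on the CRD. A CRD roughly equivalent to the suite's randomized "Foo" fixture, as a minimal apiextensions.k8s.io/v1 sketch (group and names are illustrative assumptions; the test generates its own): spec.bars is an array of objects with a required name, an age string, and a bazs string list.

```go
package example

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fooCRD declares a Foo custom resource whose spec.bars items require a
// "name" field, so create/apply without it fails client-side validation.
func fooCRD() *apiextensionsv1.CustomResourceDefinition {
	bar := apiextensionsv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"},
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"name": {Type: "string", Description: "Name of Bar."},
			"age":  {Type: "string", Description: "Age of Bar."},
			"bazs": {Type: "array", Items: &apiextensionsv1.JSONSchemaPropsOrArray{
				Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
			}},
		},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"bars": {Type: "array", Items: &apiextensionsv1.JSONSchemaPropsOrArray{Schema: &bar}},
								},
							},
						},
					},
				},
			}},
		},
	}
}
```
------------------------------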
• [SLOW TEST:14.861 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":5,"skipped":92,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:49.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-8dbed427-a7ff-4466-91fa-4f78f1cbadc0 STEP: Creating a pod to test consume configMaps Oct 23 00:42:49.807: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409" in namespace "configmap-9799" to be "Succeeded or Failed" Oct 23 00:42:49.809: INFO: Pod "pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373541ms Oct 23 00:42:51.812: INFO: Pod "pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005071027s Oct 23 00:42:53.814: INFO: Pod "pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007733843s STEP: Saw pod success Oct 23 00:42:53.814: INFO: Pod "pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409" satisfied condition "Succeeded or Failed" Oct 23 00:42:53.817: INFO: Trying to get logs from node node2 pod pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409 container agnhost-container: STEP: delete the pod Oct 23 00:42:53.842: INFO: Waiting for pod pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409 to disappear Oct 23 00:42:53.844: INFO: Pod pod-configmaps-8cfd0390-5dc3-4043-a872-9f31735a8409 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:53.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9799" for this suite. 
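The "mappings and Item mode" exercised by the ConfigMap test above amount to a configMap volume whose items remap a key to a new path and set a per-file mode. A minimal sketch, not the suite's exact manifest (the names, key, path, and mode below are illustrative):

  kubectl create configmap test-volume-map --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.34
      # Print the mapped file's mode and contents, then exit.
      command: ["sh", "-c", "stat -c '%a' /etc/cm/path/to/data-2 && cat /etc/cm/path/to/data-2"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/cm
    volumes:
    - name: configmap-volume
      configMap:
        name: test-volume-map
        items:
        - key: data-1           # key that exists in the ConfigMap
          path: path/to/data-2  # remapped file path inside the mount
          mode: 0400            # the per-item "Item mode"
  EOF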
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:54.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 00:42:54.100: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 23 00:42:54.104: INFO: starting watch STEP: patching STEP: updating Oct 23 00:42:54.114: INFO: waiting for watch events with expected annotations Oct 23 00:42:54.114: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:54.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-4758" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":7,"skipped":188,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:54.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:54.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-750" for this suite.
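The PodTemplates lifecycle above is plain create/read/patch/delete on the core v1 PodTemplate resource. A rough kubectl equivalent (name, labels, and image are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: podtemplate-demo
  template:
    metadata:
      labels:
        app: podtemplate-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
  EOF
  kubectl get podtemplate podtemplate-demo -o yaml   # read it back
  kubectl patch podtemplate podtemplate-demo --type=merge \
    -p '{"metadata":{"annotations":{"patched":"true"}}}'
  kubectl delete podtemplate podtemplate-demo        # complete the lifecycle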
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":8,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:54.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:42:54.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571" in namespace "downward-api-5358" to be "Succeeded or Failed" Oct 23 00:42:54.315: INFO: Pod "downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300153ms Oct 23 00:42:56.318: INFO: Pod "downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006049207s Oct 23 00:42:58.322: INFO: Pod "downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009331197s STEP: Saw pod success Oct 23 00:42:58.322: INFO: Pod "downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571" satisfied condition "Succeeded or Failed" Oct 23 00:42:58.324: INFO: Trying to get logs from node node2 pod downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571 container client-container: STEP: delete the pod Oct 23 00:42:58.335: INFO: Waiting for pod downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571 to disappear Oct 23 00:42:58.337: INFO: Pod downwardapi-volume-23d43db4-4ac0-4d2b-a962-1030f2569571 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:42:58.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5358" for this suite. 
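The Downward API assertion above is that a container with no CPU limit of its own sees the node's allocatable CPU when it reads limits.cpu through a downwardAPI volume. A minimal sketch of such a pod (names and the divisor are illustrative):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.34
      # No resources.limits.cpu is set, so the file below falls back to
      # the node's allocatable CPU.
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m   # report the value in millicores
  EOF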
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":225,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:58.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Oct 23 00:42:58.416: INFO: Waiting up to 5m0s for pod "client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af" in namespace "containers-1902" to be "Succeeded or Failed" Oct 23 00:42:58.419: INFO: Pod "client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566129ms Oct 23 00:43:00.422: INFO: Pod "client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005942526s Oct 23 00:43:02.427: INFO: Pod "client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011155409s STEP: Saw pod success Oct 23 00:43:02.427: INFO: Pod "client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af" satisfied condition "Succeeded or Failed" Oct 23 00:43:02.429: INFO: Trying to get logs from node node2 pod client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af container agnhost-container: STEP: delete the pod Oct 23 00:43:02.440: INFO: Waiting for pod client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af to disappear Oct 23 00:43:02.442: INFO: Pod client-containers-9415d49f-6a03-49a9-bd47-498f56dcd9af no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:02.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1902" for this suite. 
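The "docker cmd" in the test name above is the image's CMD: setting spec.containers[].args replaces CMD while leaving the ENTRYPOINT alone (command would replace the ENTRYPOINT instead). An illustrative pod, not the suite's manifest:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.34
      # args overrides the image's default CMD.
      args: ["echo", "overridden", "arguments"]
  EOF
  # Once the container has run, "kubectl logs client-containers-demo"
  # prints: overridden arguments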
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":240,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:02.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:02.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2635" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":11,"skipped":255,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:02.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-68e5a4bc-05ad-4ece-a6d6-d4be8af57f08 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:02.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6223" for this suite. 
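The empty-key ConfigMap test above relies on API-server validation: a data key must be a valid identifier, so an empty key is rejected at create time. A sketch that reproduces the failure (the object name is illustrative):

  # Expected to fail validation: "" is not a valid ConfigMap data key.
  cat <<'EOF' | kubectl create -f - || echo "rejected, as the test expects"
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-empty-key-demo
  data:
    "": "value-1"
  EOF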
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":12,"skipped":257,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:02.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:43:03.050: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:43:05.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546583, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546583, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546583, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546583, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:43:08.069: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:08.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9206" for this suite. STEP: Destroying namespace "webhook-9206-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.564 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":13,"skipped":265,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:02.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W1023 00:42:08.673329 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:43:10.688: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:10.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8916" for this suite.
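The deleteOptions behavior checked by the garbage-collector test above corresponds to foreground cascading deletion: the replication controller keeps existing (with a deletionTimestamp and the foregroundDeletion finalizer) until every pod it owns is gone. A kubectl sketch under those assumptions (name and image are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: simpletest-rc
  spec:
    replicas: 2
    selector:
      app: simpletest
    template:
      metadata:
        labels:
          app: simpletest
      spec:
        containers:
        - name: nginx
          image: nginx:1.21
  EOF
  # Foreground propagation: the rc object stays visible until its pods are deleted.
  kubectl delete rc simpletest-rc --cascade=foreground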
• [SLOW TEST:68.082 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:41:50.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W1023 00:41:50.546305 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 00:41:50.546: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 00:41:50.548: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2657 Oct 23 00:41:50.564: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:41:52.571: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:41:54.568: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Oct 23 00:41:54.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 23 00:41:54.849: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Oct 23 00:41:54.849: INFO: stdout: "iptables" Oct 23 00:41:54.849: INFO: proxyMode: iptables Oct 23 00:41:54.856: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 23 00:41:54.858: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2657 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2657 I1023 00:41:54.868197 27 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2657, replica count: 3 I1023 00:41:57.919065 27 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:42:00.919278 27 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:42:03.919970 27 runners.go:190] 
affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:42:03.926: INFO: Creating new exec pod Oct 23 00:42:18.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec execpod-affinityj4mk6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Oct 23 00:42:20.239: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Oct 23 00:42:20.239: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:42:20.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec execpod-affinityj4mk6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.0.127 80' Oct 23 00:42:20.494: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.0.127 80\nConnection to 10.233.0.127 80 port [tcp/http] succeeded!\n" Oct 23 00:42:20.494: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:42:20.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec execpod-affinityj4mk6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.0.127:80/ ; done' Oct 23 00:42:20.816: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n" Oct 23 00:42:20.817: INFO: stdout: "\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c\naffinity-clusterip-timeout-62q7c" Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 
00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Received response from host: affinity-clusterip-timeout-62q7c Oct 23 00:42:20.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec execpod-affinityj4mk6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.0.127:80/' Oct 23 00:42:21.077: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n" Oct 23 00:42:21.078: INFO: stdout: "affinity-clusterip-timeout-62q7c" Oct 23 00:42:41.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec execpod-affinityj4mk6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.0.127:80/' Oct 23 00:42:41.327: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n" Oct 23 00:42:41.327: INFO: stdout: "affinity-clusterip-timeout-62q7c" Oct 23 00:43:01.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2657 exec execpod-affinityj4mk6 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.0.127:80/' Oct 23 00:43:01.762: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.0.127:80/\n" Oct 23 00:43:01.762: INFO: stdout: "affinity-clusterip-timeout-6zq25" Oct 23 00:43:01.762: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2657, will wait for the garbage collector to delete the pods Oct 23 00:43:01.828: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 4.839399ms Oct 23 00:43:01.929: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.791711ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:13.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2657" for this suite. 
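The affinity behavior in the Services log above is visible in the stdout: sixteen consecutive requests all land on affinity-clusterip-timeout-62q7c, and only after the exec pod stays idle past the affinity timeout does a different backend (affinity-clusterip-timeout-6zq25) answer. A Service sketch with ClientIP affinity and a deliberately short timeout (the selector and timeout value are illustrative, not the suite's):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-clusterip-timeout
  spec:
    type: ClusterIP
    selector:
      app: affinity-backend    # hypothetical pod label
    ports:
    - port: 80
      targetPort: 80
    sessionAffinity: ClientIP  # pin each client IP to one backend
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10     # affinity expires after ~10s of inactivity
  EOF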
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:83.446 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:08.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 23 00:43:08.180: INFO: Waiting up to 5m0s for pod "pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7" in namespace "emptydir-2789" to be "Succeeded or Failed" Oct 23 00:43:08.183: INFO: Pod "pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.55423ms Oct 23 00:43:10.186: INFO: Pod "pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006199427s Oct 23 00:43:12.190: INFO: Pod "pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009439266s Oct 23 00:43:14.193: INFO: Pod "pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01238077s STEP: Saw pod success Oct 23 00:43:14.193: INFO: Pod "pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7" satisfied condition "Succeeded or Failed" Oct 23 00:43:14.195: INFO: Trying to get logs from node node2 pod pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7 container test-container: STEP: delete the pod Oct 23 00:43:14.233: INFO: Waiting for pod pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7 to disappear Oct 23 00:43:14.235: INFO: Pod pod-ad98031e-96c1-40f8-bbf5-14a5144ffdd7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:14.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2789" for this suite. 
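The (root,0777,default) triple in the EmptyDir test name above reads as: run as root, expect 0777 permissions, use the default (node-disk) medium. An illustrative stand-in for the suite's mounttest image:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.34
      # Create a 0777 file in the emptyDir and print its mode back.
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}   # default medium = node storage; medium: Memory would use tmpfs
  EOF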
• [SLOW TEST:6.096 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:14.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-7614 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-7614 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7614 Oct 23 00:42:14.878: INFO: Found 0 stateful pods, waiting for 1 Oct 23 00:42:24.883: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 23 00:42:24.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:42:25.113: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:42:25.113: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:42:25.113: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:42:25.116: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 23 00:42:35.121: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:42:35.121: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:42:35.133: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:42:35.133: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:42:35.133: INFO: Oct 23 00:42:35.133: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 23 00:42:36.136: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.997154581s Oct 23 00:42:37.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.994214097s Oct 23 00:42:38.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.990062047s Oct 23 00:42:39.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985952946s Oct 23 00:42:40.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.982063865s Oct 23 00:42:41.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.977930897s Oct 23 00:42:42.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.974639403s Oct 23 00:42:43.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.97112996s Oct 23 00:42:44.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 967.584721ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7614 Oct 23 00:42:45.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:42:45.414: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 23 00:42:45.414: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:42:45.414: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:42:45.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:42:45.642: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 23 00:42:45.642: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:42:45.642: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:42:45.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:42:45.892: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Oct 23 00:42:45.892: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:42:45.892: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:42:45.895: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Oct 23 00:42:55.900: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:42:55.900: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:42:55.900: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 23 00:42:55.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:42:56.150: INFO: stderr: "+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:42:56.150: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:42:56.150: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:42:56.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:42:56.451: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:42:56.451: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:42:56.451: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:42:56.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7614 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:42:56.729: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:42:56.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:42:56.729: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:42:56.729: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:42:56.731: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 23 00:43:06.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:43:06.737: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:43:06.737: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:43:06.748: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:06.748: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:06.748: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:06.748: INFO: ss-2 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:06.748: INFO: Oct 23 00:43:06.748: INFO: StatefulSet ss has not 
reached scale 0, at 3 Oct 23 00:43:07.751: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:07.751: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:07.751: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:07.751: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:07.752: INFO: Oct 23 00:43:07.752: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:08.755: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:08.755: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:08.755: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:08.755: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:08.755: INFO: Oct 23 00:43:08.755: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:09.758: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:09.758: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:09.758: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:09.759: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:09.759: INFO: Oct 23 00:43:09.759: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:10.763: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:10.763: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:10.763: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:10.763: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:10.763: INFO: Oct 23 00:43:10.763: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:11.767: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:11.767: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:11.767: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:11.767: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:11.768: INFO: Oct 23 00:43:11.768: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:12.770: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:12.771: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:12.771: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:12.771: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:12.771: INFO: Oct 23 00:43:12.771: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:13.775: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:13.775: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:14 +0000 UTC }] Oct 23 00:43:13.775: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 
00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:13.775: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:13.775: INFO: Oct 23 00:43:13.775: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 23 00:43:14.779: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:14.779: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:14.779: INFO: ss-2 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:14.779: INFO: Oct 23 00:43:14.779: INFO: StatefulSet ss has not reached scale 0, at 2 Oct 23 00:43:15.783: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:43:15.783: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:42:35 +0000 UTC }] Oct 23 00:43:15.783: INFO: Oct 23 00:43:15.783: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7614 Oct 23 00:43:16.786: INFO: Scaling statefulset ss to 0 Oct 23 00:43:16.795: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 00:43:16.797: INFO: Deleting all statefulsets in ns statefulset-7614 Oct 23 00:43:16.799: INFO: Scaling statefulset ss to 0 Oct 23 00:43:16.809: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:43:16.811: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct
23 00:43:16.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7614" for this suite. • [SLOW TEST:61.979 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":3,"skipped":53,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:16.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:43:16.883: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07" in namespace "security-context-test-4223" to be "Succeeded or Failed" Oct 23 00:43:16.886: INFO: Pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377705ms Oct 23 00:43:18.889: INFO: Pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005539831s Oct 23 00:43:20.893: INFO: Pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009503892s Oct 23 00:43:22.898: INFO: Pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014040581s Oct 23 00:43:22.898: INFO: Pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07" satisfied condition "Succeeded or Failed" Oct 23 00:43:22.904: INFO: Got logs for pod "busybox-privileged-false-3647f3b2-e0b4-4f81-ab5a-8c3e882d1a07": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:22.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4223" for this suite. 
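The "ip: RTNETLINK answers: Operation not permitted" line above is the expected result: with privileged set to false the busybox container lacks CAP_NET_ADMIN, so its ip command is refused by the kernel. A minimal client-go sketch of a pod in that shape (pod name, image, and command are illustrative, not the exact fixture the framework generates):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged := false // the condition under test: the container must NOT be privileged
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29", // illustrative image
				// Exercises a capability (NET_ADMIN) that an unprivileged
				// container does not hold; the kernel answers with EPERM.
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Printing the object instead of submitting it keeps the sketch runnable without a cluster; applying the emitted JSON would reproduce the same denied syscall.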
• [SLOW TEST:6.059 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":63,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:27.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 23 00:42:27.941: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57147 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:42:27.942: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57147 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 23 00:42:37.950: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57423 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:42:37.950: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57423 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 23 00:42:47.960: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57570 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:42:47.960: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57570 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 23 00:42:57.968: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57686 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:42:57.968: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8113 598651ea-31a2-49b6-9d89-716696a1bca1 57686 0 2021-10-23 00:42:27 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-10-23 00:42:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Oct 23 00:43:07.983: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8113 3a2f98bf-5aab-4903-8321-6cd9d423d6a6 57938 0 2021-10-23 00:43:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 00:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:43:07.983: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8113 3a2f98bf-5aab-4903-8321-6cd9d423d6a6 57938 0 2021-10-23 00:43:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 00:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 23 00:43:17.990: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8113 3a2f98bf-5aab-4903-8321-6cd9d423d6a6 58225 0 2021-10-23 00:43:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 00:43:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:43:17.991: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8113 3a2f98bf-5aab-4903-8321-6cd9d423d6a6 58225 0 2021-10-23 00:43:07 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-10-23 00:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:27.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8113" for this suite. • [SLOW TEST:60.088 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:13.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1147.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1147.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1147.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1147.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1147.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1147.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:43:28.027: INFO: DNS probes using dns-1147/dns-test-7a8146bf-5554-4478-9237-6000ff1474a7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:28.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1147" for this suite. • [SLOW TEST:14.079 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:14.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 23 00:43:14.331: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:16.335: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:18.336: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 00:43:18.353: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:20.355: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:22.360: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:24.355: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 23 00:43:24.364: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 00:43:24.366: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 00:43:26.368: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 00:43:26.372: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 00:43:28.368: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 00:43:28.372: INFO: Pod pod-with-prestop-exec-hook still exists Oct 23 00:43:30.367: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 23 00:43:30.369: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:30.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5437" for this suite. 
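The run above pairs two pods: pod-handle-http-request is the observer, and pod-with-prestop-exec-hook calls it from its PreStop hook while terminating, which is why the delete is followed by polling until the pod disappears before the hook result is checked. A hedged sketch of the hooked pod's shape (names, image, and target URL are illustrative; in the v1.21 API used by this run the hook type is corev1.Handler, renamed LifecycleHandler in later releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container after the delete request is
					// accepted and before SIGTERM reaches the main process.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c",
								"curl -s http://pod-handle-http-request:8080/echo?msg=prestop"}, // illustrative URL
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}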
• [SLOW TEST:16.099 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":284,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:28.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-2db3d79c-1b89-4763-afd9-da9239e3615f STEP: Creating secret with name secret-projected-all-test-volume-0adf0922-9624-4617-a6c7-f5d7db801ecb STEP: Creating a pod to test Check all projections for projected volume plugin Oct 23 00:43:28.088: INFO: Waiting up to 5m0s for pod "projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877" in namespace "projected-7808" to be "Succeeded or Failed" Oct 23 00:43:28.091: INFO: Pod "projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.552449ms Oct 23 00:43:30.095: INFO: Pod "projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006452585s Oct 23 00:43:32.101: INFO: Pod "projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012044585s STEP: Saw pod success Oct 23 00:43:32.101: INFO: Pod "projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877" satisfied condition "Succeeded or Failed" Oct 23 00:43:32.103: INFO: Trying to get logs from node node2 pod projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877 container projected-all-volume-test: STEP: delete the pod Oct 23 00:43:32.115: INFO: Waiting for pod projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877 to disappear Oct 23 00:43:32.117: INFO: Pod projected-volume-b6cf3c92-dcb3-4e94-9c73-8c63f0f23877 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:32.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7808" for this suite. 
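The projected-7808 run above exercises a single projected volume that merges a ConfigMap, a Secret, and downward-API fields under one mount point. A minimal sketch of such a volume source (resource names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// All three sources materialize as files under one mount.
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}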
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:10.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1334 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1334 STEP: creating replication controller externalsvc in namespace services-1334 I1023 00:43:10.833723 25 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1334, replica count: 2 I1023 00:43:13.885784 25 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 23 00:43:13.901: INFO: Creating new exec pod Oct 23 00:43:17.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1334 exec execpod7bdx8 -- /bin/sh -x -c nslookup nodeport-service.services-1334.svc.cluster.local' Oct 23 00:43:18.174: INFO: stderr: "+ nslookup nodeport-service.services-1334.svc.cluster.local\n" Oct 23 00:43:18.174: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-1334.svc.cluster.local\tcanonical name = externalsvc.services-1334.svc.cluster.local.\nName:\texternalsvc.services-1334.svc.cluster.local\nAddress: 10.233.12.1\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1334, will wait for the garbage collector to delete the pods Oct 23 00:43:18.233: INFO: Deleting ReplicationController externalsvc took: 4.420634ms Oct 23 00:43:18.334: INFO: Terminating ReplicationController externalsvc pods took: 100.6956ms Oct 23 00:43:34.245: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:34.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1334" for this suite. 
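The services-1334 run above flips an existing NodePort service to type ExternalName and then verifies via nslookup that the name resolves as a CNAME to externalsvc. A hedged client-go sketch of that mutation, assuming a reachable kubeconfig (namespace and names mirror the log, but the exact update sequence is illustrative, not the framework's helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	svcs := cs.CoreV1().Services("services-1334")

	svc, err := svcs.Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Flip the type: drop the NodePort-specific fields and the cluster IP,
	// then point the name at another service's in-cluster FQDN. The DNS
	// server then serves the record as a CNAME, as seen in the nslookup
	// output above.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-1334.svc.cluster.local"
	svc.Spec.Ports = nil
	svc.Spec.ClusterIP = ""

	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("nodeport-service now resolves as a CNAME to externalsvc")
}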
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:23.472 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":3,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:34.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:34.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9732" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":126,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:32.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 00:43:35.202: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:35.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3372" for this suite. 
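In the container-runtime-3372 run above, the container writes OK to the termination-log file and exits zero; with TerminationMessagePolicy FallbackToLogsOnError the kubelet reads that file, falling back to container logs only when a failing container writes nothing. A minimal sketch of a container in that shape (name, image, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "termination-message-container",
		Image: "busybox:1.29", // illustrative image
		// Write the message to the termination-log path and exit 0.
		Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		// FallbackToLogsOnError: the file wins whenever it is written;
		// logs are consulted only for a failed container with no file.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}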
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:30.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-e3391bfa-b6ff-4e6e-89cf-6b3656fc084a STEP: Creating configMap with name cm-test-opt-upd-aca273a6-8216-4c58-8601-70c3624b9635 STEP: Creating the pod Oct 23 00:43:30.463: INFO: The status of Pod pod-projected-configmaps-04cb5bd5-f85f-48f8-908a-a5da6d22826d is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:32.469: INFO: The status of Pod pod-projected-configmaps-04cb5bd5-f85f-48f8-908a-a5da6d22826d is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:43:34.467: INFO: The status of Pod pod-projected-configmaps-04cb5bd5-f85f-48f8-908a-a5da6d22826d is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-e3391bfa-b6ff-4e6e-89cf-6b3656fc084a STEP: Updating configmap cm-test-opt-upd-aca273a6-8216-4c58-8601-70c3624b9635 STEP: Creating configMap with name cm-test-opt-create-7e496b84-2607-413b-842a-86b02b0df7dc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:36.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6028" for this suite. 
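The projected-6028 run above mounts its ConfigMaps with optional set to true, which is what lets the pod keep running while one referenced ConfigMap is deleted and another is created only after the pod starts; the kubelet's periodic volume sync then folds each change into the mounted files without a restart. A minimal sketch of one such volume source (the ConfigMap name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true // pod may start, and keep running, even if the ConfigMap is absent
	vol := corev1.Volume{
		Name: "createcm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
				Optional:             &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}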
• [SLOW TEST:6.112 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:24.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-df59cb66-4b27-4596-b3ac-ea3721db310f STEP: Creating the pod Oct 23 00:42:24.677: INFO: The status of Pod pod-configmaps-c24ad3d5-6595-4f17-b707-b2a8fa00e2e0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:26.683: INFO: The status of Pod pod-configmaps-c24ad3d5-6595-4f17-b707-b2a8fa00e2e0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:28.681: INFO: The status of Pod pod-configmaps-c24ad3d5-6595-4f17-b707-b2a8fa00e2e0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:42:30.680: INFO: The status of Pod pod-configmaps-c24ad3d5-6595-4f17-b707-b2a8fa00e2e0 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-df59cb66-4b27-4596-b3ac-ea3721db310f STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3680" for this suite. 
• [SLOW TEST:73.576 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:35.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:43:35.301: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7ed7fde5-8d35-49fb-a887-c828807f865f", Controller:(*bool)(0xc004a37762), BlockOwnerDeletion:(*bool)(0xc004a37763)}} Oct 23 00:43:35.305: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6d49ccad-c5bf-4b01-aa08-c9d74c63f158", Controller:(*bool)(0xc00492050a), BlockOwnerDeletion:(*bool)(0xc00492050b)}} Oct 23 00:43:35.309: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"24311fe5-ed72-4a72-ba4d-2aef0ae20e14", Controller:(*bool)(0xc00497c4ea), BlockOwnerDeletion:(*bool)(0xc00497c4eb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:40.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7285" for this suite. 
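In the gc-7285 run above, pod1 lists pod3 as its owner, pod2 lists pod1, and pod3 lists pod2, so the ownership graph forms a cycle; the test asserts that the garbage collector still makes progress instead of deadlocking on BlockOwnerDeletion. A minimal sketch of one link in that circle (the UID is copied from the log output; in a live cluster it must match the actual owner object's UID):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	controller, block := true, true
	// pod1 claims pod3 as its controlling owner; pod2 then owns pod1 and
	// pod3 owns pod2, closing the circle the GC must be able to break.
	ref := metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               "pod3",
		UID:                types.UID("7ed7fde5-8d35-49fb-a887-c828807f865f"),
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
	out, _ := json.MarshalIndent(ref, "", "  ")
	fmt.Println(string(out))
}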
• [SLOW TEST:5.092 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:38.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-2f8e61cc-4e68-41a7-8454-fe6cb61730cd STEP: Creating a pod to test consume configMaps Oct 23 00:43:38.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5" in namespace "projected-3701" to be "Succeeded or Failed" Oct 23 00:43:38.259: INFO: Pod "pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.780149ms Oct 23 00:43:40.262: INFO: Pod "pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007736445s Oct 23 00:43:42.266: INFO: Pod "pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011868937s STEP: Saw pod success Oct 23 00:43:42.266: INFO: Pod "pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5" satisfied condition "Succeeded or Failed" Oct 23 00:43:42.268: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5 container agnhost-container: STEP: delete the pod Oct 23 00:43:42.322: INFO: Waiting for pod pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5 to disappear Oct 23 00:43:42.324: INFO: Pod pod-projected-configmaps-39360f4a-b811-42ee-b0ae-826ff74a50c5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:42.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3701" for this suite. 
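The projected-3701 run above sets defaultMode on the volume, overriding the 0644 permissions that projected files otherwise receive; the test then asserts the pod observes exactly the mode it set. A minimal sketch (ConfigMap name and the chosen mode are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // illustrative; files appear as 0644 without DefaultMode
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}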
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:28.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:44.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9210" for this suite. • [SLOW TEST:16.115 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:40.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 00:43:40.456: INFO: Waiting up to 5m0s for pod "security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238" in namespace "security-context-8271" to be "Succeeded or Failed" Oct 23 00:43:40.459: INFO: Pod "security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.224289ms Oct 23 00:43:42.463: INFO: Pod "security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006722026s Oct 23 00:43:44.466: INFO: Pod "security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009911095s STEP: Saw pod success Oct 23 00:43:44.466: INFO: Pod "security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238" satisfied condition "Succeeded or Failed" Oct 23 00:43:44.469: INFO: Trying to get logs from node node2 pod security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238 container test-container: STEP: delete the pod Oct 23 00:43:44.684: INFO: Waiting for pod security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238 to disappear Oct 23 00:43:44.686: INFO: Pod security-context-94c0c609-e626-4215-82c9-fc6d5ffa0238 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:44.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8271" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:44.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:44.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-451" for this suite. 
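The services-451 run above drives a hand-built Endpoints object through create, update, patch, and delete-by-collection, with no Service selector populating it. A minimal sketch of such an object (name, address, and port are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "example-endpoint"},
		Subsets: []corev1.EndpointSubset{{
			// Manually managed backends; normally the endpoints controller
			// fills these in from a Service's pod selector.
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.24"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	out, _ := json.MarshalIndent(ep, "", "  ")
	fmt.Println(string(out))
}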
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":9,"skipped":130,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:34.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1541" for this suite. • [SLOW TEST:11.104 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":5,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:36.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:47.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5676" for this suite. 
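The two ResourceQuota runs above count API objects rather than compute resources: new Services (or ReplicaSets) are admitted only while the counted usage stays under the hard caps, which is why the services run rejects a LoadBalancer Service with a NodePort once it would exceed the remaining quota. A minimal sketch of a quota in that shape (name and limits are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-for-services"},
		Spec: corev1.ResourceQuotaSpec{
			// Object-count quotas: admission charges each created Service
			// against these totals and rejects creations that exceed them.
			Hard: corev1.ResourceList{
				corev1.ResourceServices:              resource.MustParse("2"),
				corev1.ResourceServicesNodePorts:     resource.MustParse("1"),
				corev1.ResourceServicesLoadBalancers: resource.MustParse("1"),
			},
		},
	}
	out, _ := json.MarshalIndent(rq, "", "  ")
	fmt.Println(string(out))
}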
• [SLOW TEST:11.065 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":17,"skipped":315,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:47.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 00:43:47.714: INFO: starting watch STEP: patching STEP: updating Oct 23 00:43:47.726: INFO: waiting for watch events with expected annotations Oct 23 00:43:47.726: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:47.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-6201" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":18,"skipped":329,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:42.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:43:42.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7" in namespace "projected-840" to be "Succeeded or Failed" Oct 23 00:43:42.400: INFO: Pod "downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.074431ms Oct 23 00:43:44.403: INFO: Pod "downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00591054s Oct 23 00:43:46.408: INFO: Pod "downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010836719s Oct 23 00:43:48.413: INFO: Pod "downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016586376s STEP: Saw pod success Oct 23 00:43:48.413: INFO: Pod "downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7" satisfied condition "Succeeded or Failed" Oct 23 00:43:48.416: INFO: Trying to get logs from node node2 pod downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7 container client-container: STEP: delete the pod Oct 23 00:43:48.482: INFO: Waiting for pod downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7 to disappear Oct 23 00:43:48.484: INFO: Pod downwardapi-volume-e0851264-26fd-4377-8510-4057e43f2fb7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:48.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-840" for this suite. • [SLOW TEST:6.135 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:45.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:43:45.624: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:51.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9008" for this suite. 
• [SLOW TEST:5.562 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":6,"skipped":148,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:51.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:43:51.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303" in namespace "downward-api-6667" to be "Succeeded or Failed" Oct 23 00:43:51.212: INFO: Pod "downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296135ms Oct 23 00:43:53.216: INFO: Pod "downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005791225s Oct 23 00:43:55.220: INFO: Pod "downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009960236s STEP: Saw pod success Oct 23 00:43:55.220: INFO: Pod "downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303" satisfied condition "Succeeded or Failed" Oct 23 00:43:55.222: INFO: Trying to get logs from node node1 pod downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303 container client-container: STEP: delete the pod Oct 23 00:43:55.236: INFO: Waiting for pod downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303 to disappear Oct 23 00:43:55.238: INFO: Pod downwardapi-volume-ffc3a282-3ac4-4695-a84d-64ce44f7f303 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:43:55.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6667" for this suite. 
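The downward-api-6667 run above exposes the container's own CPU request to the container itself as a file, via a resourceFieldRef. A minimal sketch of the volume (path and container name are illustrative; containerName must match a container in the same pod):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					// Projects spec.resources.requests.cpu of the named
					// container into the file at Path.
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}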
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":152,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:44.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:43:44.223: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 23 00:43:49.226: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Oct 23 00:43:53.239: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Oct 23 00:43:53.244: INFO: observed ReplicaSet test-rs in namespace replicaset-6421 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 00:43:53.252: INFO: observed ReplicaSet test-rs in namespace replicaset-6421 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 00:43:53.261: INFO: observed ReplicaSet test-rs in namespace replicaset-6421 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 00:43:53.264: INFO: observed ReplicaSet test-rs in namespace replicaset-6421 with ReadyReplicas 1, AvailableReplicas 1 Oct 23 00:44:00.406: INFO: observed ReplicaSet test-rs in namespace replicaset-6421 with ReadyReplicas 2, AvailableReplicas 2 Oct 23 00:44:00.578: INFO: observed Replicaset test-rs in namespace replicaset-6421 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:00.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6421" for this suite. 
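The replicaset-6421 run above scales test-rs with an update and then a patch, watching ReadyReplicas and AvailableReplicas climb as the controller reconciles. A hedged client-go sketch of the patch step, assuming a reachable kubeconfig (a strategic-merge patch carries only the field being changed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Only spec.replicas is sent; the ReplicaSet controller then creates
	// pods until ReadyReplicas catches up, which is what the observed
	// ReplicaSet events in the log are tracking.
	patch := []byte(`{"spec":{"replicas":3}}`)
	rs, err := cs.AppsV1().ReplicaSets("replicaset-6421").Patch(
		context.TODO(), "test-rs", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("patched %s: %d desired replicas\n", rs.Name, *rs.Spec.Replicas)
}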
• [SLOW TEST:16.397 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:00.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:00.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8340" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:34.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1023 00:42:35.024083 36 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:01.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-397" for this suite. 
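------------------------------
Note the warning logged above: batch/v1beta1 CronJob is deprecated in v1.21+ and unavailable in v1.25+, so new manifests should use batch/v1. A sketch of a ReplaceConcurrent-style CronJob with illustrative names; the job deliberately outlives its schedule interval so the replacement path gets exercised.

kubectl apply -f - <<'EOF'
apiVersion: batch/v1                   # not the deprecated batch/v1beta1
kind: CronJob
metadata:
  name: replace-demo                   # illustrative name
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace           # a still-running job is replaced by the next one
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox
            command: ["sleep", "300"]  # outlives the one-minute schedule on purpose
EOF
------------------------------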
• [SLOW TEST:86.058 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:44.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6149 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6149 I1023 00:43:44.872392 31 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6149, replica count: 2 I1023 00:43:47.924125 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:43:50.924822 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:43:53.925728 31 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:43:53.925: INFO: Creating new exec pod Oct 23 00:44:02.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6149 exec execpodstnlc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 00:44:03.282: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 00:44:03.282: INFO: stdout: "externalname-service-rtctb" Oct 23 00:44:03.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6149 exec execpodstnlc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.10.169 80' Oct 23 00:44:03.525: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.10.169 80\nConnection to 10.233.10.169 80 port [tcp/http] succeeded!\n" Oct 23 00:44:03.525: INFO: stdout: "externalname-service-vc6tf" Oct 23 00:44:03.525: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:03.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6149" for this suite. 
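------------------------------
The ExternalName-to-ClusterIP change exercised above can be approximated with kubectl alone. This is a sketch, not the suite's exact sequence: the external name and port are illustrative, and real backends would also need a selector or manual endpoints (the suite supplies them via the replication controller it creates).

kubectl create service externalname externalname-service \
  --external-name=example.com
# Strategic merge patch: null removes spec.externalName, and the ClusterIP
# type needs at least one port supplied:
kubectl patch service externalname-service --type=strategic -p \
  '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'
------------------------------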
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:18.709 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":10,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:55.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 00:43:55.307: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:03.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5817" for this suite. 
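------------------------------
The pod this test creates is essentially two init containers that must each run to completion, in order, before the app container starts. A minimal sketch with an illustrative pod name; the suite uses its own busybox and pause images.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                      # illustrative name
spec:
  restartPolicy: Always
  initContainers:                      # run sequentially, each to completion
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.4.1      # starts only after both inits succeed
EOF
------------------------------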
• [SLOW TEST:8.378 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":8,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:31.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5481 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5481 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5481 Oct 23 00:42:31.929: INFO: Found 0 stateful pods, waiting for 1 Oct 23 00:42:41.933: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 23 00:42:41.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:42:42.203: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:42:42.203: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:42:42.203: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:42:42.206: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 23 00:42:52.212: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:42:52.212: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:42:52.225: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999481s Oct 23 00:42:53.228: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997002546s Oct 23 00:42:54.231: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993517712s Oct 23 00:42:55.235: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.990314061s Oct 23 00:42:56.239: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.986458798s Oct 23 00:42:57.242: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 4.982742426s Oct 23 00:42:58.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.978599206s Oct 23 00:42:59.251: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.974128347s Oct 23 00:43:00.254: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.971430119s Oct 23 00:43:01.257: INFO: Verifying statefulset ss doesn't scale past 1 for another 967.719262ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5481 Oct 23 00:43:02.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:43:02.522: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 23 00:43:02.522: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:43:02.522: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:43:02.525: INFO: Found 1 stateful pods, waiting for 3 Oct 23 00:43:12.532: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:43:12.532: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:43:12.532: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 23 00:43:12.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:43:12.772: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:43:12.772: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:43:12.772: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:43:12.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:43:13.046: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:43:13.046: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:43:13.046: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:43:13.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 23 00:43:13.295: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Oct 23 00:43:13.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 23 00:43:13.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 23 00:43:13.295: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:43:13.298: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Oct 23 00:43:23.307: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - 
Ready=false Oct 23 00:43:23.307: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:43:23.307: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 23 00:43:23.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999434s Oct 23 00:43:24.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996375307s Oct 23 00:43:25.323: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992357898s Oct 23 00:43:26.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988442327s Oct 23 00:43:27.331: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985233702s Oct 23 00:43:28.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980554159s Oct 23 00:43:29.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976618878s Oct 23 00:43:30.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.972622418s Oct 23 00:43:31.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.968755637s Oct 23 00:43:32.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 964.369865ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5481 Oct 23 00:43:33.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:43:33.766: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 23 00:43:33.766: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:43:33.766: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:43:33.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:43:34.022: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 23 00:43:34.022: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:43:34.022: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:43:34.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5481 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 23 00:43:34.284: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Oct 23 00:43:34.284: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 23 00:43:34.284: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 23 00:43:34.284: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 00:44:04.297: INFO: Deleting all statefulset in ns statefulset-5481 Oct 23 00:44:04.300: INFO: Scaling statefulset ss to 0 Oct 23 00:44:04.307: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:44:04.309: INFO: Deleting 
statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:04.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5481" for this suite. • [SLOW TEST:92.431 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":6,"skipped":72,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:04.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Oct 23 00:44:04.363: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 23 00:44:09.366: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:09.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6122" for this suite. 
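------------------------------
The scale subresource this test drives is an ordinary API endpoint, and kubectl scale updates it under the hood. A way to inspect and drive it directly; the namespace below is the ephemeral one from this run, so substitute your own.

kubectl get --raw \
  /apis/apps/v1/namespaces/replicaset-6122/replicasets/test-rs/scale
kubectl scale replicaset test-rs --replicas=2 --namespace=replicaset-6122
------------------------------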
• [SLOW TEST:5.054 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:03.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:44:04.063: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:44:06.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:44:08.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:44:11.083: INFO: Waiting for amount 
of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:11.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7856" for this suite. STEP: Destroying namespace "webhook-7856-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.496 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":215,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:01.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 00:44:01.130: INFO: The status of Pod pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:03.133: INFO: The status of Pod pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:05.134: INFO: The status of Pod pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:07.134: INFO: The status of Pod pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:09.133: INFO: The status of Pod pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:11.132: INFO: The status of Pod pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 23 00:44:11.647: INFO: Successfully updated pod "pod-update-93dd96ed-5bf5-4e37-be5a-f0b6a416904e" STEP: verifying the updated pod is in kubernetes Oct 23 00:44:11.651: INFO: Pod update OK [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:11.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3451" for this suite. • [SLOW TEST:10.572 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:09.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Oct 23 00:44:15.994: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-545 pod-service-account-704af26e-46a5-4b50-a12b-55934beb5730 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 23 00:44:16.251: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-545 pod-service-account-704af26e-46a5-4b50-a12b-55934beb5730 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 23 00:44:16.502: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-545 pod-service-account-704af26e-46a5-4b50-a12b-55934beb5730 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:16.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-545" for this suite. 
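------------------------------
The three exec commands above read the standard service account mount, which sits at a fixed path in any pod that automounts a token. A condensed version; the pod and container names are placeholders.

SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
kubectl exec <pod-name> -c <container> -- ls "$SA_DIR"         # token, ca.crt, namespace
kubectl exec <pod-name> -c <container> -- cat "$SA_DIR/token"
------------------------------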
• [SLOW TEST:7.312 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":8,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:22.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 00:43:22.947: INFO: PodSpec: initContainers in spec.initContainers Oct 23 00:44:18.937: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-74470072-d099-4969-b497-6127ca81d766", GenerateName:"", Namespace:"init-container-622", SelfLink:"", UID:"125dd492-b2ac-4de2-8c69-4e96308e815b", ResourceVersion:"59876", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770546602, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"947014868"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.254\"\n ],\n \"mac\": \"e6:89:3d:70:29:56\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.254\"\n ],\n \"mac\": \"e6:89:3d:70:29:56\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004a8e048), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004a8e060)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004a8e078), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004a8e090)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004a8e0a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004a8e0c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-hjbxm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00208c0a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hjbxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hjbxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-hjbxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003c0e0e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0018cc000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c0e170)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c0e190)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003c0e198), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003c0e19c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003cce020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546602, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546602, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546602, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546602, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.254", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.254"}}, StartTime:(*v1.Time)(0xc004a8e0f0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0018cc0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0018cc150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://2c3941d3b99f2d25acc56fd84c5e50d2269637a4e98ea1e7a6f2f4d86f573c7e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00208c900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00208c8a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003c0e21f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:18.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-622" for this suite. 
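------------------------------
The pod dump above is the point of the test: init1 (/bin/false) is Terminated with RestartCount 3, init2 has never run, and the app container run1 is still Waiting. A minimal reproduction of that spec, with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo                 # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]            # always fails; kubelet retries with backoff
  - name: init2
    image: busybox
    command: ["/bin/true"]             # never starts until init1 succeeds
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.4.1      # stays Waiting (PodInitializing) indefinitely
EOF
------------------------------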
• [SLOW TEST:56.023 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":5,"skipped":67,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:03.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 23 00:44:04.175: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:44:04.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:44:06.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:44:08.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Oct 23 00:44:10.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:44:12.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:44:14.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:44:16.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:44:18.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546644, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:44:21.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:21.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9051" for this suite. STEP: Destroying namespace "webhook-9051-markers" for this suite. 
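------------------------------
A fail-closed webhook like the one registered above has failurePolicy Fail and a backend the API server cannot reach, so matching requests are rejected instead of admitted. A sketch using the admissionregistration v1 API; the names are illustrative, and the namespaceSelector limits the blast radius much as the suite's marker namespace does.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo               # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                  # unreachable backend => request rejected
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: no-such-service            # deliberately points nowhere
      namespace: default
      path: /validate
  namespaceSelector:
    matchLabels:
      fail-closed-demo: "true"         # scope to labelled namespaces only
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
EOF
# In a namespace labelled fail-closed-demo=true, configmap creation now
# fails closed rather than being admitted.
------------------------------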
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.600 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":9,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0} [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:00.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Oct 23 00:44:00.711: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. 
Oct 23 00:44:00.992: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Oct 23 00:44:03.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546640, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546640, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546641, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546640, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 00:44:05.026 - 00:44:17.027: INFO: deployment status unchanged; the identical v1.DeploymentStatus (differing only in log timestamp) was logged on each ~2s poll, at 00:44:05.026, 00:44:07.027, 00:44:09.026, 00:44:11.026, 00:44:13.028, 00:44:15.027 and 00:44:17.027
Oct 23 00:44:22.638: INFO: Waited 3.605433774s for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Oct 23 00:44:23.040: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:23.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2246" for this suite. • [SLOW TEST:23.250 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:11.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-kzdz8 in namespace proxy-1582 I1023 00:44:11.763910 36 runners.go:190] Created replication controller with name: proxy-service-kzdz8, namespace: proxy-1582, replica count: 1 I1023 00:44:12.815362 36 runners.go:190] proxy-service-kzdz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:44:13.816099 36 runners.go:190] proxy-service-kzdz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:44:14.817215 36 runners.go:190] proxy-service-kzdz8 Pods: 1 out of 1 created, 0 running, 1 pending,
0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:44:15.819014 36 runners.go:190] proxy-service-kzdz8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:44:16.820298 36 runners.go:190] proxy-service-kzdz8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:44:16.823: INFO: setup took 5.069030007s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts
Oct 23 00:44:16.826 - 00:44:16.949: INFO: attempts (0) through (19): each attempt hit the same 16 proxy URLs under /api/v1/namespaces/proxy-1582/: the pod itself (pods/proxy-service-kzdz8-kstqv/proxy/), its named HTTP ports (pods/proxy-service-kzdz8-kstqv:160, :162, :1080/proxy/ plus the http: scheme variants), its HTTPS ports (pods/https:proxy-service-kzdz8-kstqv:443, :460, :462/proxy/), and the service's named ports (services/proxy-service-kzdz8:portname1, :portname2/proxy/, the http: variants, and services/https:proxy-service-kzdz8:tlsportname1, :tlsportname2/proxy/). All 320 requests returned the expected payloads (foo, bar, test, tls baz, tls qux) with HTTP 200 and latencies between roughly 2ms and 17ms.
STEP: deleting ReplicationController proxy-service-kzdz8 in namespace proxy-1582, will wait for the garbage collector to delete the pods Oct 23 00:44:17.006: INFO: Deleting ReplicationController proxy-service-kzdz8 took: 4.581578ms Oct 23 00:44:17.108: INFO: Terminating ReplicationController proxy-service-kzdz8 pods took: 101.240039ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:24.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1582" for this suite.
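All of these requests go through the apiserver's proxy subresources. A hand-run sketch of a single such request, assuming the pod and service names from this particular run (both are regenerated on every run):

    # proxy to a named pod port via the apiserver
    kubectl --kubeconfig=/root/.kube/config get --raw \
      /api/v1/namespaces/proxy-1582/pods/proxy-service-kzdz8-kstqv:160/proxy/
    # the service-level equivalent, addressed by port name
    kubectl --kubeconfig=/root/.kube/config get --raw \
      /api/v1/namespaces/proxy-1582/services/proxy-service-kzdz8:portname1/proxy/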
• [SLOW TEST:12.592 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":6,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:21.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 23 00:44:21.371: INFO: Waiting up to 5m0s for pod "pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a" in namespace "emptydir-6172" to be "Succeeded or Failed" Oct 23 00:44:21.374: INFO: Pod "pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704493ms Oct 23 00:44:23.378: INFO: Pod "pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00641207s Oct 23 00:44:25.382: INFO: Pod "pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010513613s Oct 23 00:44:27.385: INFO: Pod "pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014237236s STEP: Saw pod success Oct 23 00:44:27.385: INFO: Pod "pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a" satisfied condition "Succeeded or Failed" Oct 23 00:44:27.388: INFO: Trying to get logs from node node2 pod pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a container test-container: STEP: delete the pod Oct 23 00:44:27.447: INFO: Waiting for pod pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a to disappear Oct 23 00:44:27.449: INFO: Pod pod-f915a9a2-7364-4198-ac8a-2a92c7716a8a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:27.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6172" for this suite. 
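The EmptyDir test above mounts a tmpfs-backed emptyDir and verifies a file with mode 0644 inside it. A self-contained sketch of an equivalent pod, with an illustrative image and command rather than the suite's mounttest container:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # create a file with mode 0644 on the tmpfs mount and show its permissions
        command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory              # tmpfs-backed emptyDir
    EOF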
• [SLOW TEST:6.120 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":202,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:23.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:44:23.991: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-53fad185-d958-44cd-b1dd-6c05de385619" in namespace "security-context-test-2464" to be "Succeeded or Failed" Oct 23 00:44:23.993: INFO: Pod "busybox-readonly-false-53fad185-d958-44cd-b1dd-6c05de385619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041219ms Oct 23 00:44:25.997: INFO: Pod "busybox-readonly-false-53fad185-d958-44cd-b1dd-6c05de385619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006218578s Oct 23 00:44:28.006: INFO: Pod "busybox-readonly-false-53fad185-d958-44cd-b1dd-6c05de385619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014646478s Oct 23 00:44:28.006: INFO: Pod "busybox-readonly-false-53fad185-d958-44cd-b1dd-6c05de385619" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:28.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2464" for this suite. 
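The pod used above asserts that with readOnlyRootFilesystem set to false the container can still write to its root filesystem. A minimal sketch of that securityContext, with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: writer
        image: busybox
        # writing to the root filesystem succeeds because it is not read-only
        command: ["sh", "-c", "echo writable > /rootfs-check && cat /rootfs-check"]
        securityContext:
          readOnlyRootFilesystem: false
    EOF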
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":58,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:27.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Oct 23 00:44:27.496: INFO: Waiting up to 5m0s for pod "var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91" in namespace "var-expansion-838" to be "Succeeded or Failed" Oct 23 00:44:27.498: INFO: Pod "var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170522ms Oct 23 00:44:29.501: INFO: Pod "var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004997823s Oct 23 00:44:31.505: INFO: Pod "var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009413341s STEP: Saw pod success Oct 23 00:44:31.505: INFO: Pod "var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91" satisfied condition "Succeeded or Failed" Oct 23 00:44:31.508: INFO: Trying to get logs from node node2 pod var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91 container dapi-container: STEP: delete the pod Oct 23 00:44:31.600: INFO: Waiting for pod var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91 to disappear Oct 23 00:44:31.603: INFO: Pod var-expansion-fa63fdfe-18b6-46a3-8128-cbe812242a91 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:31.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-838" for this suite. 
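The expansion tested above is Kubernetes' own $(VAR) substitution in container commands, resolved from the container's environment before the process starts. A sketch with illustrative values:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        env:
        - name: MESSAGE
          value: "substituted at pod start"
        # $(MESSAGE) is expanded by the kubelet, not by the shell
        command: ["sh", "-c", "echo $(MESSAGE)"]
    EOF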
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:31.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:31.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2214" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":12,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:28.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 23 00:44:28.052: INFO: Waiting up to 5m0s for pod "pod-063185fc-53d2-4e85-b74a-dbb732b1d51f" in namespace "emptydir-4781" to be "Succeeded or Failed" Oct 23 00:44:28.055: INFO: Pod "pod-063185fc-53d2-4e85-b74a-dbb732b1d51f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.398688ms Oct 23 00:44:30.058: INFO: Pod "pod-063185fc-53d2-4e85-b74a-dbb732b1d51f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006782641s Oct 23 00:44:32.063: INFO: Pod "pod-063185fc-53d2-4e85-b74a-dbb732b1d51f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01137514s STEP: Saw pod success Oct 23 00:44:32.063: INFO: Pod "pod-063185fc-53d2-4e85-b74a-dbb732b1d51f" satisfied condition "Succeeded or Failed" Oct 23 00:44:32.066: INFO: Trying to get logs from node node2 pod pod-063185fc-53d2-4e85-b74a-dbb732b1d51f container test-container: STEP: delete the pod Oct 23 00:44:32.078: INFO: Waiting for pod pod-063185fc-53d2-4e85-b74a-dbb732b1d51f to disappear Oct 23 00:44:32.080: INFO: Pod pod-063185fc-53d2-4e85-b74a-dbb732b1d51f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:32.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4781" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:16.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:44:22.924: INFO: Deleting pod "var-expansion-f8a22052-4d9f-4758-b230-814db483fb56" in namespace "var-expansion-2547" Oct 23 00:44:22.928: INFO: Wait up to 5m0s for pod "var-expansion-f8a22052-4d9f-4758-b230-814db483fb56" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:34.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2547" for this suite. • [SLOW TEST:18.069 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":9,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:18.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:36.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2669" for this suite. • [SLOW TEST:17.060 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":6,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:11.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-hnhh STEP: Creating a pod to test atomic-volume-subpath Oct 23 00:44:11.260: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hnhh" in namespace "subpath-1443" to be "Succeeded or Failed" Oct 23 00:44:11.262: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.498338ms Oct 23 00:44:13.267: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007774203s Oct 23 00:44:15.271: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011147205s Oct 23 00:44:17.275: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 6.015187373s Oct 23 00:44:19.278: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 8.018277801s Oct 23 00:44:21.282: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 10.022089442s Oct 23 00:44:23.285: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 12.025541928s Oct 23 00:44:25.288: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 14.028832301s Oct 23 00:44:27.292: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.032511518s Oct 23 00:44:29.295: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 18.0357649s Oct 23 00:44:31.299: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 20.038990272s Oct 23 00:44:33.303: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 22.043807553s Oct 23 00:44:35.306: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Running", Reason="", readiness=true. Elapsed: 24.046686019s Oct 23 00:44:37.311: INFO: Pod "pod-subpath-test-projected-hnhh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.050955427s STEP: Saw pod success Oct 23 00:44:37.311: INFO: Pod "pod-subpath-test-projected-hnhh" satisfied condition "Succeeded or Failed" Oct 23 00:44:37.313: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-hnhh container test-container-subpath-projected-hnhh: STEP: delete the pod Oct 23 00:44:37.325: INFO: Waiting for pod pod-subpath-test-projected-hnhh to disappear Oct 23 00:44:37.327: INFO: Pod pod-subpath-test-projected-hnhh no longer exists STEP: Deleting pod pod-subpath-test-projected-hnhh Oct 23 00:44:37.327: INFO: Deleting pod "pod-subpath-test-projected-hnhh" in namespace "subpath-1443" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:37.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1443" for this suite. • [SLOW TEST:26.118 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:24.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:44:24.423: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 23 00:44:32.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-454 --namespace=crd-publish-openapi-454 create -f -' Oct 23 00:44:32.908: INFO: stderr: "" Oct 23 00:44:32.908: INFO: stdout: "e2e-test-crd-publish-openapi-6139-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 23 00:44:32.908: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-454 --namespace=crd-publish-openapi-454 delete e2e-test-crd-publish-openapi-6139-crds test-cr' Oct 23 00:44:33.083: INFO: stderr: "" Oct 23 00:44:33.083: INFO: stdout: "e2e-test-crd-publish-openapi-6139-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 23 00:44:33.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-454 --namespace=crd-publish-openapi-454 apply -f -' Oct 23 00:44:33.404: INFO: stderr: "" Oct 23 00:44:33.404: INFO: stdout: "e2e-test-crd-publish-openapi-6139-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 23 00:44:33.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-454 --namespace=crd-publish-openapi-454 delete e2e-test-crd-publish-openapi-6139-crds test-cr' Oct 23 00:44:33.580: INFO: stderr: "" Oct 23 00:44:33.580: INFO: stdout: "e2e-test-crd-publish-openapi-6139-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 23 00:44:33.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-454 explain e2e-test-crd-publish-openapi-6139-crds' Oct 23 00:44:33.933: INFO: stderr: "" Oct 23 00:44:33.933: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6139-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:37.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-454" for this suite. 
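------------------------------
[Editor's note] The "works for CRD preserving unknown fields at the schema root" test above explains the empty DESCRIPTION from kubectl explain: the generated CRD publishes a root schema that is just an object with x-kubernetes-preserve-unknown-fields, so client-side validation accepts any unknown properties. A minimal sketch of such a CRD; the widgets.example.com names are illustrative stand-ins for the generated e2e-test-crd-publish-openapi-6139 names:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com            # illustrative group/plural
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # accept any fields at the root

Once such a CRD is established, `kubectl create -f -` with arbitrary properties and `kubectl explain widgets` behave as logged above.
------------------------------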
• [SLOW TEST:13.124 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":7,"skipped":97,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:37.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1023 00:44:37.549325 36 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 00:44:37.556: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 23 00:44:37.560: INFO: starting watch STEP: patching STEP: updating Oct 23 00:44:37.573: INFO: waiting for watch events with expected annotations Oct 23 00:44:37.573: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:37.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-846" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":8,"skipped":97,"failed":0} SS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:37.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:37.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4858" for this suite. 
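------------------------------
[Editor's note] The CronJob API test above triggers the W1023 deprecation warning because the suite still touches batch/v1beta1; batch/v1 is the replacement named in that warning. A minimal batch/v1 CronJob of the kind the create/get/list/watch/patch/update/delete steps operate on (name, schedule, and image are illustrative):

  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: hello                          # illustrative
  spec:
    schedule: "*/1 * * * *"              # every minute
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
            - name: hello
              image: busybox             # illustrative
              command: ["sh", "-c", "date; echo hello"]

The "patching /status" and "updating /status" steps in the log go through the CronJob status subresource, which a plain `kubectl apply` of the manifest above never writes.
------------------------------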
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":9,"skipped":99,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:35.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:44:35.259: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f87cfb94-f0e6-4bd2-9388-505828c88862" in namespace "security-context-test-9444" to be "Succeeded or Failed" Oct 23 00:44:35.263: INFO: Pod "alpine-nnp-false-f87cfb94-f0e6-4bd2-9388-505828c88862": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725253ms Oct 23 00:44:37.267: INFO: Pod "alpine-nnp-false-f87cfb94-f0e6-4bd2-9388-505828c88862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008547151s Oct 23 00:44:39.271: INFO: Pod "alpine-nnp-false-f87cfb94-f0e6-4bd2-9388-505828c88862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012000326s Oct 23 00:44:39.271: INFO: Pod "alpine-nnp-false-f87cfb94-f0e6-4bd2-9388-505828c88862" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:39.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9444" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":291,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:36.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-1f243098-d281-4b24-9608-edb671de9c02 STEP: Creating a pod to test consume configMaps Oct 23 00:44:36.125: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db" in namespace "projected-1208" to be "Succeeded or Failed" Oct 23 00:44:36.127: INFO: Pod "pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005048ms Oct 23 00:44:38.131: INFO: Pod "pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006061081s Oct 23 00:44:40.137: INFO: Pod "pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011702705s Oct 23 00:44:42.142: INFO: Pod "pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016560873s STEP: Saw pod success Oct 23 00:44:42.142: INFO: Pod "pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db" satisfied condition "Succeeded or Failed" Oct 23 00:44:42.144: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db container agnhost-container: STEP: delete the pod Oct 23 00:44:42.155: INFO: Waiting for pod pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db to disappear Oct 23 00:44:42.157: INFO: Pod pod-projected-configmaps-b0e98cb1-df52-4d36-8f81-a76b26ead6db no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:42.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1208" for this suite. • [SLOW TEST:6.082 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":99,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:37.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:44:37.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3" in namespace "downward-api-5873" to be "Succeeded or Failed" Oct 23 00:44:37.418: INFO: Pod "downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.455205ms Oct 23 00:44:39.422: INFO: Pod "downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008255118s Oct 23 00:44:41.427: INFO: Pod "downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.012415375s Oct 23 00:44:43.432: INFO: Pod "downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017413634s STEP: Saw pod success Oct 23 00:44:43.432: INFO: Pod "downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3" satisfied condition "Succeeded or Failed" Oct 23 00:44:43.435: INFO: Trying to get logs from node node2 pod downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3 container client-container: STEP: delete the pod Oct 23 00:44:43.454: INFO: Waiting for pod downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3 to disappear Oct 23 00:44:43.456: INFO: Pod downwardapi-volume-e1e254b0-a621-4b97-b131-fded195414d3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:43.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5873" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":239,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:43.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:43.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4712" for this suite. 
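------------------------------
[Editor's note] The "lease API should be available" test above needs no pods; it only exercises the coordination.k8s.io/v1 Lease resource, the same object Kubernetes uses for leader election and node heartbeats. A minimal sketch (name and values illustrative):

  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: example-lease                  # illustrative
  spec:
    holderIdentity: holder-a             # identity of the current holder
    leaseDurationSeconds: 30             # how long the holder is considered valid
------------------------------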
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":14,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:42.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running Oct 23 00:44:44.228: INFO: running pods: 0 < 1 Oct 23 00:44:46.231: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:48.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8789" for this suite. • [SLOW TEST:6.086 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":8,"skipped":105,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:48.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Oct 23 00:44:48.328: INFO: Major version: 1 STEP: Confirm minor version Oct 23 00:44:48.328: INFO: cleanMinorVersion: 21 Oct 23 00:44:48.328: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:48.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-9912" for this suite. 
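------------------------------
[Editor's note] The DisruptionController test above ("should update/patch PodDisruptionBudget status") creates a PDB, waits for a matching pod to run, then writes status both by update and by patch against the /status subresource. A minimal policy/v1 PodDisruptionBudget of the kind under test — policy/v1 is the GA API available in this v1.21 cluster; name, selector, and threshold are illustrative:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: example-pdb                    # illustrative
  spec:
    minAvailable: 1                      # keep at least one pod up during voluntary disruptions
    selector:
      matchLabels:
        app: example                     # illustrative
------------------------------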
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":9,"skipped":119,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:32.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Oct 23 00:44:32.136: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:34.139: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:36.140: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:38.140: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:40.139: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 00:44:40.154: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:42.158: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:44.158: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:46.160: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Oct 23 00:44:46.168: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 00:44:46.170: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 00:44:48.173: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 00:44:48.176: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 00:44:50.172: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 00:44:50.176: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 00:44:52.173: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 00:44:52.177: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 00:44:54.171: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 00:44:54.175: INFO: Pod pod-with-prestop-http-hook still exists Oct 23 00:44:56.172: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 23 00:44:56.175: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:56.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4637" for this suite. 
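------------------------------
[Editor's note] In the lifecycle-hook test above, pod-handle-http-request is a server that records the hook call; pod-with-prestop-http-hook carries a preStop httpGet handler aimed at it, and "check prestop hook" verifies the request arrived while the pod terminated. A minimal sketch of a preStop httpGet hook; for brevity this one targets the container's own port rather than a separate handler pod, and the image is illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: main
      image: nginx                       # illustrative
      lifecycle:
        preStop:
          httpGet:                       # kubelet issues this request before the container is stopped
            path: /
            port: 80
------------------------------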
• [SLOW TEST:24.101 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":65,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:43.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2928 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2928 STEP: creating replication controller externalsvc in namespace services-2928 I1023 00:44:43.640941 31 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2928, replica count: 2 I1023 00:44:46.692566 31 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:44:49.693517 31 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 23 00:44:49.707: INFO: Creating new exec pod Oct 23 00:44:53.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2928 exec execpodd24f4 -- /bin/sh -x -c nslookup clusterip-service.services-2928.svc.cluster.local' Oct 23 00:44:53.994: INFO: stderr: "+ nslookup clusterip-service.services-2928.svc.cluster.local\n" Oct 23 00:44:53.994: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-2928.svc.cluster.local\tcanonical name = externalsvc.services-2928.svc.cluster.local.\nName:\texternalsvc.services-2928.svc.cluster.local\nAddress: 10.233.17.140\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2928, will wait for the garbage collector to delete the pods Oct 23 00:44:54.053: INFO: Deleting ReplicationController externalsvc took: 5.551594ms Oct 23 00:44:54.154: INFO: Terminating ReplicationController externalsvc pods took: 100.853013ms Oct 23 00:44:58.763: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:58.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2928" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:15.172 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":15,"skipped":271,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:47.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1023 00:43:57.820003 28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:44:59.839: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:44:59.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6814" for this suite. 
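------------------------------
[Editor's note] The [sig-network] Services test above flips clusterip-service to type=ExternalName pointing at externalsvc's in-cluster FQDN, then proves the change with the nslookup from the exec pod: the name now resolves as a CNAME to externalsvc.services-2928.svc.cluster.local. The end state looks like the sketch below; converting an existing ClusterIP service also means clearing spec.clusterIP in the same update, since ExternalName services carry no cluster IP:

  apiVersion: v1
  kind: Service
  metadata:
    name: clusterip-service
    namespace: services-2928
  spec:
    type: ExternalName
    externalName: externalsvc.services-2928.svc.cluster.local   # served by DNS as a CNAME
------------------------------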
• [SLOW TEST:72.084 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:58.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Oct 23 00:44:58.840: INFO: Waiting up to 5m0s for pod "client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639" in namespace "containers-999" to be "Succeeded or Failed" Oct 23 00:44:58.844: INFO: Pod "client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639": Phase="Pending", Reason="", readiness=false. Elapsed: 3.761676ms Oct 23 00:45:00.848: INFO: Pod "client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008417268s Oct 23 00:45:02.856: INFO: Pod "client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015983017s STEP: Saw pod success Oct 23 00:45:02.856: INFO: Pod "client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639" satisfied condition "Succeeded or Failed" Oct 23 00:45:02.858: INFO: Trying to get logs from node node2 pod client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639 container agnhost-container: STEP: delete the pod Oct 23 00:45:02.872: INFO: Waiting for pod client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639 to disappear Oct 23 00:45:02.874: INFO: Pod client-containers-32f7a58e-7cd0-46f5-ac7e-126697c6e639 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:02.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-999" for this suite. 
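------------------------------
[Editor's note] The Docker Containers test above ("override the image's default command and arguments") relies on the two Pod fields that shadow image metadata: command replaces the image ENTRYPOINT and args replaces the image CMD. A minimal sketch (image and strings illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-example      # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox                     # illustrative
      command: ["/bin/echo"]             # replaces the image ENTRYPOINT
      args: ["override", "arguments"]    # replaces the image CMD
------------------------------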
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":283,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:56.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 00:44:56.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 23 00:44:56.404: INFO: stderr: "" Oct 23 00:44:56.404: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 23 00:44:56.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Oct 23 00:44:56.818: INFO: stderr: "" Oct 23 00:44:56.818: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 00:44:56.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6188 delete pods e2e-test-httpd-pod' Oct 23 00:45:04.588: INFO: stderr: "" Oct 23 00:45:04.588: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:04.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6188" for this suite. 
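------------------------------
[Editor's note] The dry-run test above hinges on one command, reconstructed here from the log: the patch is accepted and fully admitted by the server but persists nothing, which is why the subsequent image check still sees httpd:2.4.38-1.

  kubectl --namespace=kubectl-6188 patch pod e2e-test-httpd-pod \
    -p '{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}}' \
    --dry-run=server

--dry-run=server sends the request through the full admission and validation chain, unlike --dry-run=client, which never leaves kubectl.
------------------------------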
• [SLOW TEST:8.377 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":10,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:31.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Oct 23 00:44:56.905: INFO: EndpointSlice for Service endpointslice-6757/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:06.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-6757" for this suite. 
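------------------------------
[Editor's note] The EndpointSlice test above checks that the endpointslice controller mirrors Services into Endpoints and EndpointSlices and recreates slices after they are deleted (hence the transient "not found" for example-named-port). Slices are linked to their Service by a well-known label, so they can be listed like this:

  kubectl --namespace=endpointslice-6757 get endpointslices \
    -l kubernetes.io/service-name=example-named-port
------------------------------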
• [SLOW TEST:35.140 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":13,"skipped":273,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:06.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:06.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2446 version' Oct 23 00:45:07.065: INFO: stderr: "" Oct 23 00:45:07.065: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.5\", GitCommit:\"aea7bbadd2fc0cd689de94a54e5b7b758869d691\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:10:45Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:07.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2446" for this suite. 
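------------------------------
[Editor's note] Looking back at the [sig-node] Security Context test logged earlier ("should not allow privilege escalation when false"): the alpine-nnp-false pod succeeds because allowPrivilegeEscalation: false keeps the container's processes from gaining privileges they did not start with, e.g. via setuid binaries. A minimal sketch, with image and command illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: alpine-nnp-false-example       # illustrative, after the pod name in the log
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: alpine                      # the test pod is alpine-based; tag illustrative
      command: ["sh", "-c", "id"]        # illustrative payload
      securityContext:
        allowPrivilegeEscalation: false
------------------------------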
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":14,"skipped":277,"failed":0} SSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":19,"skipped":331,"failed":0} [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:59.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 00:44:59.874: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:09.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6774" for this suite. • [SLOW TEST:9.441 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":20,"skipped":331,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:07.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-cbb25174-51e3-4d91-b6d7-60691f9b131c STEP: Creating a pod to test consume secrets Oct 23 00:45:07.141: INFO: Waiting up to 5m0s for pod "pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca" in namespace "secrets-2569" to be "Succeeded or Failed" Oct 23 00:45:07.144: INFO: Pod "pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.342364ms Oct 23 00:45:09.147: INFO: Pod "pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006163069s Oct 23 00:45:11.150: INFO: Pod "pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.00887264s Oct 23 00:45:13.155: INFO: Pod "pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013781526s STEP: Saw pod success Oct 23 00:45:13.155: INFO: Pod "pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca" satisfied condition "Succeeded or Failed" Oct 23 00:45:13.157: INFO: Trying to get logs from node node1 pod pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca container secret-volume-test: STEP: delete the pod Oct 23 00:45:13.169: INFO: Waiting for pod pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca to disappear Oct 23 00:45:13.171: INFO: Pod pod-secrets-56479706-12e0-4feb-aa5a-1320ccd736ca no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:13.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2569" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":291,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:09.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:45:09.716: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:45:11.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546709, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546709, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546709, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546709, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 
00:45:14.734: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:14.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3407" for this suite. STEP: Destroying namespace "webhook-3407-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.524 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":21,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:02.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:02.944: INFO: Creating deployment "webserver-deployment" Oct 23 00:45:02.948: INFO: Waiting for observed generation 1 Oct 23 00:45:04.955: INFO: Waiting for all required pods to come up Oct 23 00:45:04.958: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 23 00:45:10.966: INFO: Waiting for deployment "webserver-deployment" to complete Oct 23 00:45:10.970: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 23 00:45:10.977: INFO: Updating deployment webserver-deployment Oct 23 00:45:10.977: INFO: Waiting for observed generation 2 Oct 23 00:45:12.984: INFO: Waiting for the 
first rollout's replicaset to have .status.availableReplicas = 8 Oct 23 00:45:12.987: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 23 00:45:12.989: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 23 00:45:12.998: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 23 00:45:12.998: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 23 00:45:13.001: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 23 00:45:13.006: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 23 00:45:13.006: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 23 00:45:13.013: INFO: Updating deployment webserver-deployment Oct 23 00:45:13.013: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 23 00:45:13.017: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 23 00:45:13.020: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 00:45:15.030: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4722 e970b3d9-a58f-41cf-996f-4436ab5d0654 61399 3 2021-10-23 00:45:02 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} 
false false false}] [] Always 0xc0050df0c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-23 00:45:13 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-10-23 00:45:13 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 23 00:45:15.033: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-4722 72427fa9-c658-43a0-b112-4fd6d0eb45e9 61398 3 2021-10-23 00:45:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e970b3d9-a58f-41cf-996f-4436ab5d0654 0xc0050df4a7 0xc0050df4a8}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e970b3d9-a58f-41cf-996f-4436ab5d0654\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050df528 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:45:15.033: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 23 00:45:15.033: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-4722 640855d0-8647-4f88-bb42-6c89b816c264 61392 3 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e970b3d9-a58f-41cf-996f-4436ab5d0654 0xc0050df587 0xc0050df588}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:45:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e970b3d9-a58f-41cf-996f-4436ab5d0654\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050df5f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:45:15.039: INFO: Pod "webserver-deployment-795d758f88-6j6k9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6j6k9 webserver-deployment-795d758f88- deployment-4722 d5a92dae-e219-4b41-bbde-12440270a269 61380 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] 
map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545c18f 0xc00545c1a0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r68hv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r68hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 00:45:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.040: INFO: Pod "webserver-deployment-795d758f88-7j8cc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7j8cc webserver-deployment-795d758f88- deployment-4722 7040999e-5ad5-4307-993d-9a3b5b867c73 61238 0 2021-10-23 00:45:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545c36f 0xc00545c380}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cnnh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cnnh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node
2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 00:45:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.040: INFO: Pod "webserver-deployment-795d758f88-fjbxh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fjbxh webserver-deployment-795d758f88- deployment-4722 dcaeda08-67cb-455f-b41f-b1f3981c6092 61355 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545c54f 0xc00545c560}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vnlzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnlzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.040: INFO: Pod "webserver-deployment-795d758f88-hkj2j" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hkj2j webserver-deployment-795d758f88- deployment-4722 cc9d9761-fee2-44da-91f6-6525488945fc 61243 0 2021-10-23 00:45:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545c6cf 0xc00545c6e0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fmclt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmclt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 00:45:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.043: INFO: Pod "webserver-deployment-795d758f88-jqdmk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jqdmk webserver-deployment-795d758f88- deployment-4722 e1bf5923-9f5d-4702-a5d6-e7cfbe01d637 61295 0 2021-10-23 00:45:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545c8af 0xc00545c8c0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rlpsl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlpsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 00:45:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.043: INFO: Pod "webserver-deployment-795d758f88-n2qlb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n2qlb webserver-deployment-795d758f88- deployment-4722 a16831f8-5baf-4ab7-b008-d4d508578b3d 61404 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545ca8f 0xc00545caa0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vbjvd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vbjvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 00:45:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.044: INFO: Pod "webserver-deployment-795d758f88-nqzbp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nqzbp webserver-deployment-795d758f88- deployment-4722 0b08020a-2b60-4972-b636-5f59e34405e2 61326 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545cc6f 0xc00545cc80}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c9648,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9648,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.044: INFO: Pod "webserver-deployment-795d758f88-pm852" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pm852 webserver-deployment-795d758f88- deployment-4722 2fe0e3db-676b-4f64-b2de-89a7d627bb48 61460 0 2021-10-23 00:45:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.15" ], "mac": "66:10:e4:b4:4f:06", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.15" ], "mac": "66:10:e4:b4:4f:06", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545cdef 0xc00545ce00}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}}} {multus Update v1 2021-10-23 00:45:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}},"f:status":{"f:containerStatuses":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xphtz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xphtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 00:45:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.045: INFO: Pod "webserver-deployment-795d758f88-r5l7x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-r5l7x webserver-deployment-795d758f88- deployment-4722 65d20324-409a-451d-8921-ef0e1f52c7f4 61232 0 2021-10-23 00:45:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545cfef 0xc00545d000}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2zqct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zqct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-10-23 00:45:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.045: INFO: Pod "webserver-deployment-795d758f88-v9xlg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-v9xlg webserver-deployment-795d758f88- deployment-4722 f34b2024-dc61-44f2-88dc-091ddd4ae055 61347 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545d1cf 0xc00545d1e0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j7lfh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7lfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.046: INFO: Pod "webserver-deployment-795d758f88-wnk8q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wnk8q webserver-deployment-795d758f88- deployment-4722 acbddc21-e10d-4bdf-a152-ca7c162c4e9e 61323 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545d34f 0xc00545d360}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hnjmm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnjmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.048: INFO: Pod "webserver-deployment-795d758f88-zh7pk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zh7pk webserver-deployment-795d758f88- deployment-4722 3e2e0e6c-b800-44b3-8550-bbee7b6d3e94 61375 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545d4cf 0xc00545d4e0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c2ndl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2ndl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.048: INFO: Pod "webserver-deployment-795d758f88-zw6g6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zw6g6 webserver-deployment-795d758f88- deployment-4722 8bd92028-e977-4ec3-be78-6e18e62e17bf 61350 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 72427fa9-c658-43a0-b112-4fd6d0eb45e9 0xc00545d64f 0xc00545d660}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"72427fa9-c658-43a0-b112-4fd6d0eb45e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mhsz4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequireme
nts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhsz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.049: INFO: Pod "webserver-deployment-847dcfb7fb-2k9g9" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2k9g9 webserver-deployment-847dcfb7fb- deployment-4722 2c66c51c-cbd7-4489-abb3-c9f851e16dc0 61357 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00545d7cf 0xc00545d7e0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2v4m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2v4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceA
ccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 00:45:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.049: INFO: Pod "webserver-deployment-847dcfb7fb-56zcq" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-56zcq webserver-deployment-847dcfb7fb- deployment-4722 63fccd8c-b090-47db-bc3a-296d74e95be0 61129 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.12" ], "mac": "a2:b9:a3:51:57:88", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.12" ], "mac": "a2:b9:a3:51:57:88", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00545d98f 0xc00545d9a0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nlkld,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nlkld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.12,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://e541b88685b137210b0ef65bd676d4a95de5d817a2f3500eb9d64fa23947518b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.049: INFO: Pod "webserver-deployment-847dcfb7fb-5bgms" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5bgms webserver-deployment-847dcfb7fb- deployment-4722 eb1bae04-4f18-4ce4-af0b-876797b71c8d 61081 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "6e:4b:f8:14:8b:48", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "6e:4b:f8:14:8b:48", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
640855d0-8647-4f88-bb42-6c89b816c264 0xc00545db8f 0xc00545dba0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ctkz7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ctkz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.10,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://a5d242ff2de9bad76d3445ab3df6494c1a6bbc85cd5ed3f25bb6922e49678c26,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.050: INFO: Pod "webserver-deployment-847dcfb7fb-5hvx9" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5hvx9 webserver-deployment-847dcfb7fb- deployment-4722 d81073cd-641e-4ce6-a364-fef06c5105ed 61373 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00545dd8f 0xc00545dda0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bjnlw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjnlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.050: INFO: Pod "webserver-deployment-847dcfb7fb-9fcwb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9fcwb webserver-deployment-847dcfb7fb- deployment-4722 bd77fc9a-ce76-41c5-b1e5-247411983266 61369 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00545deff 0xc00545df10}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-flxxg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Co
ntainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-flxxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.050: INFO: Pod "webserver-deployment-847dcfb7fb-d6s5p" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-d6s5p webserver-deployment-847dcfb7fb- deployment-4722 75bd6f28-f110-415a-bd68-de214ebdc7f5 61360 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554e06f 0xc00554e080}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-65nqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65nqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.050: INFO: Pod "webserver-deployment-847dcfb7fb-hv86k" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hv86k webserver-deployment-847dcfb7fb- deployment-4722 d7a5830a-e245-4fd7-a1de-7b4af069b4ea 61334 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554e1df 0xc00554e1f0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cq49h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Co
ntainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq49h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.051: INFO: Pod "webserver-deployment-847dcfb7fb-j96vg" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-j96vg webserver-deployment-847dcfb7fb- deployment-4722 40c89b65-8f95-4cca-bb70-396f3416a867 61320 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554e34f 0xc00554e360}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bgvvj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgvvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceA
ccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 00:45:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.051: INFO: Pod "webserver-deployment-847dcfb7fb-jf4vr" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jf4vr webserver-deployment-847dcfb7fb- deployment-4722 9cc933a2-4cd4-4937-80f7-6083a3cdd1f9 61372 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554e50f 0xc00554e520}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fkd9b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkd9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceA
ccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 00:45:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.052: INFO: Pod "webserver-deployment-847dcfb7fb-jhhnz" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jhhnz webserver-deployment-847dcfb7fb- deployment-4722 6cb56d9a-1a7a-4e21-ab32-9dab21cff7b1 61178 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.25" ], "mac": "22:e8:3c:b2:a7:7d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.25" ], "mac": "22:e8:3c:b2:a7:7d", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554e6cf 0xc00554e6e0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gf277,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gf277,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.25,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d17a9c353889011ba231fa7d454358eb8c4f84d1b799b05fa02939b2c6ddcfc0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.052: INFO: Pod "webserver-deployment-847dcfb7fb-jvcx9" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jvcx9 webserver-deployment-847dcfb7fb- deployment-4722 aa1c7d19-88a5-4d31-95d1-d9108b33eb9a 61136 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.13" ], "mac": "aa:52:e4:58:11:ba", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.13" ], "mac": "aa:52:e4:58:11:ba", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 
640855d0-8647-4f88-bb42-6c89b816c264 0xc00554e8cf 0xc00554e8e0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dvqw6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvqw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.13,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ba53d441f87b002300762def1d62a15501bdbc2f2bea65227f7d602945f2a252,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.052: INFO: Pod "webserver-deployment-847dcfb7fb-jxwq8" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jxwq8 webserver-deployment-847dcfb7fb- deployment-4722 26a43a60-d2f5-44ec-b868-e9113e1e4fba 61327 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554eacf 0xc00554eae0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lr8f8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lr8f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.052: INFO: Pod "webserver-deployment-847dcfb7fb-lrjm4" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-lrjm4 webserver-deployment-847dcfb7fb- deployment-4722 81e82838-98c9-40e5-8506-fbc9b8f0d6c7 61096 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.21" ], "mac": "fa:2a:32:e2:d0:62", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.21" ], "mac": "fa:2a:32:e2:d0:62", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554ec3f 0xc00554ec50}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xtw7g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtw7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.21,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://cc96b4c8f53c31627f55b773705f8a6677988617abfdf66bfbb105cba724b672,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.053: INFO: Pod "webserver-deployment-847dcfb7fb-ntgmw" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ntgmw webserver-deployment-847dcfb7fb- deployment-4722 47411237-d275-4b6c-9ad0-d716689752e7 61366 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554ee3f 0xc00554ee50}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8mjdf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8mjdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.053: INFO: Pod "webserver-deployment-847dcfb7fb-pbvd6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pbvd6 webserver-deployment-847dcfb7fb- deployment-4722 2e0424ae-7cd6-4424-95b4-2f2bcc23e34b 61174 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.24" ], "mac": "96:d0:af:a6:9f:c9", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.24" ], "mac": "96:d0:af:a6:9f:c9", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554efaf 0xc00554efc0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.24\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kstbj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kstbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.24,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://ee4218ca623300358ad1070fdd822f1c1ab5d1ddf8d5dc9596d6d0d6e7a6cd11,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.24,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.053: INFO: Pod "webserver-deployment-847dcfb7fb-pg495" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pg495 webserver-deployment-847dcfb7fb- deployment-4722 b0732aa6-501c-4da7-a679-422b2f8b4d02 61335 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554f1af 0xc00554f1c0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4s5cn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4s5cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 00:45:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.054: INFO: Pod "webserver-deployment-847dcfb7fb-q45cc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-q45cc webserver-deployment-847dcfb7fb- deployment-4722 9d30dc83-2f39-48f9-87a8-d6e50a5342f6 61171 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.23" ], "mac": "06:38:75:f3:84:2b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.23" ], "mac": "06:38:75:f3:84:2b", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554f36f 0xc00554f380}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.23\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vzvhl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzvhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.23,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d2671437c6dadb4af3e2aa5c86ee5cb1d3887039199b5738babf6f6aad3a2525,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.054: INFO: Pod "webserver-deployment-847dcfb7fb-qdpbt" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qdpbt webserver-deployment-847dcfb7fb- deployment-4722 b217ab57-4e97-4c39-877d-9091cbb7193a 61362 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554f56f 0xc00554f580}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-96wnj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96wnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.054: INFO: Pod "webserver-deployment-847dcfb7fb-wswnh" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wswnh webserver-deployment-847dcfb7fb- deployment-4722 da8a0b63-0336-4647-9c84-d824118c7d54 61314 0 2021-10-23 00:45:13 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554f6df 0xc00554f6f0}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hqbcn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]Co
ntainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqbcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:15.054: INFO: Pod "webserver-deployment-847dcfb7fb-x6m85" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-x6m85 webserver-deployment-847dcfb7fb- deployment-4722 9c3cca4a-416b-49e7-bbe5-c3000d78c15a 61100 0 2021-10-23 00:45:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.22" ], "mac": "1e:88:2b:f4:76:35", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.22" ], "mac": "1e:88:2b:f4:76:35", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 640855d0-8647-4f88-bb42-6c89b816c264 0xc00554f84f 0xc00554f860}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"640855d0-8647-4f88-bb42-6c89b816c264\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rzgh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rzgh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.22,StartTime:2021-10-23 00:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://9d659409012b21760984f27151fe37f16f1c4ccb79b6ef64a4faa00870210750,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:15.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4722" for this suite. 
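The deployment dump above comes from the proportional-scaling conformance check: a rollout is left in flight so that an old and a new ReplicaSet coexist, and a subsequent scale operation is expected to split the added replicas across both ReplicaSets in proportion to their current sizes. A minimal way to reproduce roughly the same behaviour by hand is sketched below; this is not the test's own code, and the deployment name webserver and the broken tag does-not-exist are illustrative, not taken from this run.

# Create a deployment, then start a rollout that cannot finish (bad tag),
# so two ReplicaSets stay active at once.
kubectl create deployment webserver --replicas=10 \
  --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl set image deployment/webserver httpd=k8s.gcr.io/e2e-test-images/httpd:does-not-exist
# Scale while the rollout is stuck; the deployment controller distributes
# the extra replicas proportionally between the old and new ReplicaSets.
kubectl scale deployment/webserver --replicas=30
kubectl get rs -l app=webserver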
• [SLOW TEST:12.145 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":17,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:13.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Oct 23 00:45:13.235: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:15.239: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:17.238: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:19.238: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:21.239: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:23.240: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:25.239: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:27.239: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:29.239: INFO: The status of Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 is Running (Ready = true) Oct 23 00:45:29.244: INFO: Pod pod-hostip-c3bf7806-628f-4faf-9dcc-b8171f567541 has hostIP: 10.10.190.208 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:29.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3939" for this suite. 
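The host-IP check above only needs the pod to reach Running; the kubelet then reports the address of the node the pod landed on in status.hostIP (10.10.190.208 in this run). The same field can be read back directly, as a rough equivalent of what the test asserts; the pod name hostip-demo is illustrative.

kubectl run hostip-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --restart=Never
kubectl wait pod/hostip-demo --for=condition=Ready --timeout=5m
# status.hostIP is populated by the kubelet once the pod is bound and started:
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'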
• [SLOW TEST:16.054 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":301,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:29.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Oct 23 00:45:29.818: INFO: created pod pod-service-account-defaultsa Oct 23 00:45:29.818: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 23 00:45:29.827: INFO: created pod pod-service-account-mountsa Oct 23 00:45:29.827: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 23 00:45:29.837: INFO: created pod pod-service-account-nomountsa Oct 23 00:45:29.837: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 23 00:45:29.846: INFO: created pod pod-service-account-defaultsa-mountspec Oct 23 00:45:29.846: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 23 00:45:29.855: INFO: created pod pod-service-account-mountsa-mountspec Oct 23 00:45:29.855: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 23 00:45:29.864: INFO: created pod pod-service-account-nomountsa-mountspec Oct 23 00:45:29.864: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 23 00:45:29.874: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 23 00:45:29.874: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 23 00:45:29.883: INFO: created pod pod-service-account-mountsa-nomountspec Oct 23 00:45:29.883: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 23 00:45:29.891: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 23 00:45:29.891: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:29.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5866" for this suite. 
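The nine pods above cover every combination of ServiceAccount-level and pod-level automount settings; the logged "service account token volume mount: true/false" values show that spec.automountServiceAccountToken on the pod always overrides the ServiceAccount's own flag. A minimal opt-out looks like the sketch below (the pod name no-token-demo is illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
spec:
  automountServiceAccountToken: false   # pod-level setting wins over the ServiceAccount's
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
# No kube-api-access-* projected volume is mounted, so this prints nothing:
kubectl get pod no-token-demo -o jsonpath='{.spec.containers[0].volumeMounts}'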
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":17,"skipped":309,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:29.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:29.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1818" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":18,"skipped":323,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:30.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:30.067: INFO: Got root ca configmap in namespace "svcaccounts-7440" Oct 23 00:45:30.070: INFO: Deleted root ca configmap in namespace "svcaccounts-7440" STEP: waiting for a new root ca configmap created Oct 23 00:45:30.574: INFO: Recreated root ca configmap in namespace "svcaccounts-7440" Oct 23 00:45:30.577: INFO: Updated root ca configmap in namespace "svcaccounts-7440" STEP: waiting for the root ca configmap reconciled Oct 23 00:45:31.081: INFO: Reconciled root ca configmap in namespace "svcaccounts-7440" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:31.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7440" for this suite. 
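The sequence above (got, deleted, recreated, updated, reconciled) exercises the root-CA publisher in kube-controller-manager, which guarantees a kube-root-ca.crt ConfigMap in every namespace and reverts any drift to it. Roughly the same loop can be observed by hand; the namespace ca-demo is illustrative.

kubectl create namespace ca-demo
kubectl -n ca-demo get configmap kube-root-ca.crt   # published automatically
kubectl -n ca-demo delete configmap kube-root-ca.crt
sleep 2
# The publisher controller recreates it almost immediately:
kubectl -n ca-demo get configmap kube-root-ca.crt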
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":19,"skipped":330,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:15.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 23 00:45:15.392: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 23 00:45:17.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:19.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:21.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:23.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:25.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:27.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546715, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:45:30.411: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:30.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:38.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9197" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:23.474 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":18,"skipped":320,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:31.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-e0cd7e00-edf1-4859-ba99-6db512d98af5 STEP: Creating a pod to test consume configMaps Oct 23 00:45:31.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a" in namespace "configmap-8955" to be "Succeeded or Failed" Oct 23 00:45:31.142: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361001ms Oct 23 00:45:33.145: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006081532s Oct 23 00:45:35.151: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011286397s Oct 23 00:45:37.155: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.015591999s Oct 23 00:45:39.158: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019206853s Oct 23 00:45:41.161: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022110914s STEP: Saw pod success Oct 23 00:45:41.161: INFO: Pod "pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a" satisfied condition "Succeeded or Failed" Oct 23 00:45:41.164: INFO: Trying to get logs from node node2 pod pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a container configmap-volume-test: STEP: delete the pod Oct 23 00:45:41.182: INFO: Waiting for pod pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a to disappear Oct 23 00:45:41.184: INFO: Pod pod-configmaps-9e596968-3d6b-496b-95f8-eb6e8343f15a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:41.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8955" for this suite. • [SLOW TEST:10.095 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":331,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:38.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-72db655d-d516-4300-a4ff-f95913d5eb5b STEP: Creating a pod to test consume configMaps Oct 23 00:45:38.661: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913" in namespace "projected-8490" to be "Succeeded or Failed" Oct 23 00:45:38.663: INFO: Pod "pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200971ms Oct 23 00:45:40.666: INFO: Pod "pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005019174s Oct 23 00:45:42.670: INFO: Pod "pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008816303s STEP: Saw pod success Oct 23 00:45:42.670: INFO: Pod "pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913" satisfied condition "Succeeded or Failed" Oct 23 00:45:42.672: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913 container projected-configmap-volume-test: STEP: delete the pod Oct 23 00:45:42.686: INFO: Waiting for pod pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913 to disappear Oct 23 00:45:42.688: INFO: Pod pod-projected-configmaps-30d66d6c-a4fa-4018-9646-dedc2a9b6913 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:42.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8490" for this suite. • ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:14.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 00:45:14.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6655 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Oct 23 00:45:15.055: INFO: stderr: "" Oct 23 00:45:15.055: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 23 00:45:35.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6655 get pod e2e-test-httpd-pod -o json' Oct 23 00:45:35.279: INFO: stderr: "" Oct 23 00:45:35.279: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.35\\\"\\n ],\\n \\\"mac\\\": \\\"4a:5b:fe:14:7f:4a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.3.35\\\"\\n ],\\n \\\"mac\\\": \\\"4a:5b:fe:14:7f:4a\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2021-10-23T00:45:15Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6655\",\n \"resourceVersion\": \"61945\",\n \"uid\": \"71185dda-5872-4b59-9942-74e6dd60f2bf\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n 
\"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-d4f5x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node1\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-d4f5x\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T00:45:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T00:45:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T00:45:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-10-23T00:45:15Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://3e1ff1e44c72c4001b6c3edf8bd271271ce740a62866c830db1af97cd6eb706b\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-10-23T00:45:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.207\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.3.35\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.3.35\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-10-23T00:45:15Z\"\n }\n}\n" STEP: replace the image in the pod Oct 23 00:45:35.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6655 replace -f -' Oct 23 00:45:35.652: INFO: stderr: "" Oct 23 00:45:35.652: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Oct 23 00:45:35.654: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=kubectl-6655 delete pods e2e-test-httpd-pod' Oct 23 00:45:43.873: INFO: stderr: "" Oct 23 00:45:43.873: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:43.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6655" for this suite. • [SLOW TEST:29.011 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":22,"skipped":354,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:48.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:48.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6971" for this suite. 
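The readiness-probe case that just finished turns on a guarantee worth spelling out: a failing readiness probe only keeps the pod out of Ready (and out of Service endpoints); it never restarts the container, which is why the test asserts both "never be ready" and "never restart" over its 60-second window. A minimal sketch of such a pod, assuming an always-failing exec probe (the name, image, and command here are illustrative, not taken from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-always-fails        # illustrative name
    spec:
      restartPolicy: Always
      containers:
      - name: busybox
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]       # always fails, so Ready stays False
          initialDelaySeconds: 5
          periodSeconds: 5
        # no livenessProbe: restartCount is expected to stay 0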
• [SLOW TEST:60.053 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":123,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:04.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 23 00:45:04.681: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 23 00:45:22.905: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:45:31.462: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:51.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1384" for this suite. 
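The crd-publish-openapi test above asserts that every served version of a CRD surfaces in the aggregated OpenAPI document, both for one multi-version CRD and for two single-version CRDs in the same group. A minimal multi-version CRD of the kind it registers could look like this (group, names, and schema are illustrative, not taken from the log):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: e2e-test-foos.crd-publish-openapi-test.example.com   # illustrative
    spec:
      group: crd-publish-openapi-test.example.com
      scope: Namespaced
      names:
        plural: e2e-test-foos
        singular: e2e-test-foo
        kind: E2eTestFoo
        listKind: E2eTestFooList
      versions:
      - name: v1
        served: true
        storage: true        # exactly one version may be the storage version
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true

Both served versions should then appear as definitions under /openapi/v2, which is what the two STEPs above check for the one-CRD and two-CRD cases.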
• [SLOW TEST:46.378 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":11,"skipped":94,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:51.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-31755849-3635-4e35-ad2b-310bf1f9a209 STEP: Creating a pod to test consume configMaps Oct 23 00:45:51.105: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9" in namespace "projected-6358" to be "Succeeded or Failed" Oct 23 00:45:51.107: INFO: Pod "pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.959909ms Oct 23 00:45:53.110: INFO: Pod "pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005749059s Oct 23 00:45:55.115: INFO: Pod "pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010347553s STEP: Saw pod success Oct 23 00:45:55.115: INFO: Pod "pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9" satisfied condition "Succeeded or Failed" Oct 23 00:45:55.117: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9 container agnhost-container: STEP: delete the pod Oct 23 00:45:55.131: INFO: Waiting for pod pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9 to disappear Oct 23 00:45:55.133: INFO: Pod pod-projected-configmaps-3c9ac567-a598-4016-bb6a-c0698097feb9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:55.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6358" for this suite. 
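The projected-configMap test that just passed creates a ConfigMap, mounts it through a projected volume with a key-to-path mapping, and reads the mapped file back as a non-root user. Roughly, with illustrative key, path, and UID (the ConfigMap and container names follow the log, minus the generated suffix):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example     # the log's pod has a generated UID suffix
    spec:
      securityContext:
        runAsUser: 1000                          # non-root, per the test name
      containers:
      - name: agnhost-container
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: ["mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume-map   # created by the test first
              items:
              - key: data-2
                path: path/to/data-2                      # the "mapping" under test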
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":106,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:48.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-8667ccfe-a140-4e62-829d-fe09f71f04fb STEP: Creating a pod to test consume secrets Oct 23 00:45:48.451: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058" in namespace "projected-6852" to be "Succeeded or Failed" Oct 23 00:45:48.453: INFO: Pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432238ms Oct 23 00:45:50.456: INFO: Pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00548639s Oct 23 00:45:52.459: INFO: Pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008757158s Oct 23 00:45:54.465: INFO: Pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014652825s Oct 23 00:45:56.472: INFO: Pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021762571s STEP: Saw pod success Oct 23 00:45:56.472: INFO: Pod "pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058" satisfied condition "Succeeded or Failed" Oct 23 00:45:56.475: INFO: Trying to get logs from node node2 pod pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058 container projected-secret-volume-test: STEP: delete the pod Oct 23 00:45:56.489: INFO: Waiting for pod pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058 to disappear Oct 23 00:45:56.491: INFO: Pod pod-projected-secrets-a14eb120-5524-40d5-86fe-2f307d730058 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:56.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6852" for this suite. 
• [SLOW TEST:8.089 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":126,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:56.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 00:45:56.548: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 23 00:45:56.550: INFO: starting watch STEP: patching STEP: updating Oct 23 00:45:56.560: INFO: waiting for watch events with expected annotations Oct 23 00:45:56.560: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:56.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-8103" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":12,"skipped":128,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:43.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-7667 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7667 STEP: Deleting pre-stop pod Oct 23 00:45:58.964: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:58.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7667" for this suite. • [SLOW TEST:15.082 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":23,"skipped":361,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:55.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Oct 23 00:45:55.200: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-4517 8ad33504-9962-4e60-9511-0399579d1c95 62388 0 2021-10-23 00:45:55 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-10-23 00:45:55 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nvht7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvht7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Pri
orityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:45:55.203: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:57.207: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:45:59.207: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Oct 23 00:45:59.207: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4517 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:45:59.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Oct 23 00:45:59.303: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4517 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:45:59.303: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:45:59.419: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:45:59.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4517" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":13,"skipped":114,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:59.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:59.467: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:05.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-393" for this suite. 
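A few tests back, the DNS case set dnsPolicy: None together with a custom dnsConfig; both values are visible in its pod dump (DNSPolicy:None, Nameservers:[1.1.1.1], Searches:[resolv.conf.local]). As a manifest fragment, that is:

    spec:
      dnsPolicy: None          # ignore cluster DNS entirely
      dnsConfig:
        nameservers:
        - 1.1.1.1
        searches:
        - resolv.conf.local

With dnsPolicy: None, the pod's /etc/resolv.conf is built solely from dnsConfig, which is what the agnhost dns-server-list and dns-suffix probes verified.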
• [SLOW TEST:6.048 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":14,"skipped":117,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":329,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:42.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:42.729: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 23 00:45:47.734: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 23 00:45:47.734: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 23 00:45:49.739: INFO: Creating deployment "test-rollover-deployment" Oct 23 00:45:49.747: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 23 00:45:51.754: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 23 00:45:51.760: INFO: Ensure that both replica sets have 1 created replica Oct 23 00:45:51.765: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 23 00:45:51.772: INFO: Updating deployment test-rollover-deployment Oct 23 00:45:51.772: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 23 00:45:53.779: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 23 00:45:53.787: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 23 00:45:53.792: INFO: all replica sets need to contain the pod-template-hash label Oct 23 00:45:53.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546751, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:55.798: INFO: all replica sets need to contain the pod-template-hash label Oct 23 00:45:55.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546755, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:57.801: INFO: all replica sets need to contain the pod-template-hash label Oct 23 00:45:57.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546755, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:45:59.798: INFO: all replica sets need to contain the pod-template-hash label Oct 23 00:45:59.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546755, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Oct 23 00:46:01.800: INFO: all replica sets need to contain the pod-template-hash label Oct 23 00:46:01.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546755, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:46:03.798: INFO: all replica sets need to contain the pod-template-hash label Oct 23 00:46:03.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546755, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546749, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:46:05.799: INFO: Oct 23 00:46:05.799: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 00:46:05.807: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1644 8f205eca-be80-4d93-be91-fe068ea2dc93 62582 2 2021-10-23 00:45:49 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-23 00:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:46:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b51388 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 00:45:49 +0000 UTC,LastTransitionTime:2021-10-23 00:45:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-10-23 00:46:05 +0000 UTC,LastTransitionTime:2021-10-23 00:45:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 23 00:46:05.811: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-1644 1e9e59ff-76c3-4ba2-ba6c-74690a3ed1c0 62572 2 2021-10-23 00:45:51 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 8f205eca-be80-4d93-be91-fe068ea2dc93 0xc002b51900 0xc002b51901}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:46:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f205eca-be80-4d93-be91-fe068ea2dc93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b51978 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:46:05.811: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 23 00:46:05.811: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1644 d2764b66-24fc-4bbe-81ef-4e0af595a032 62581 2 2021-10-23 00:45:42 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 8f205eca-be80-4d93-be91-fe068ea2dc93 0xc002b516f7 0xc002b516f8}] [] [{e2e.test Update apps/v1 2021-10-23 00:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:46:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f205eca-be80-4d93-be91-fe068ea2dc93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b51798 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:46:05.811: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-1644 89d190f1-324e-4074-ac04-9ba8a5c0ac77 62330 2 2021-10-23 00:45:49 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 8f205eca-be80-4d93-be91-fe068ea2dc93 0xc002b51807 0xc002b51808}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f205eca-be80-4d93-be91-fe068ea2dc93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0xc002b51898 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:46:05.815: INFO: Pod "test-rollover-deployment-98c5f4599-qlv59" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-qlv59 test-rollover-deployment-98c5f4599- deployment-1644 10a44249-2d67-486b-b727-fddb75ce846e 62394 0 2021-10-23 00:45:51 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.32" ], "mac": "62:d5:5d:06:14:73", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.32" ], "mac": "62:d5:5d:06:14:73", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 1e9e59ff-76c3-4ba2-ba6c-74690a3ed1c0 0xc002b51e6f 0xc002b51e80}] [] [{kube-controller-manager Update v1 2021-10-23 00:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e9e59ff-76c3-4ba2-ba6c-74690a3ed1c0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:45:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:45:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.32\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j62rb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j62rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:45:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.32,StartTime:2021-10-23 00:45:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:45:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://817eab4d901a69fd0576527a6b94347d1711113768e8bf8ff485c1d199445121,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:05.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1644" for this suite. 
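The rollover deployment's strategy is worth pulling out of the dump above: maxSurge: 1 with maxUnavailable: 0 and minReadySeconds: 10 forbid taking the old replica down until the new pod has been Ready for 10 seconds, which is why the status loop reports AvailableReplicas:1 / UnavailableReplicas:1 for several iterations before converging. Condensed to a manifest using only fields present in the dump:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-rollover-deployment
      labels:
        name: rollover-pod
    spec:
      replicas: 1
      minReadySeconds: 10
      selector:
        matchLabels:
          name: rollover-pod
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1          # one extra pod may come up...
          maxUnavailable: 0    # ...but the old one may not go away early
      template:
        metadata:
          labels:
            name: rollover-pod
        spec:
          containers:
          - name: agnhost
            image: k8s.gcr.io/e2e-test-images/agnhost:2.32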
• [SLOW TEST:23.124 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":20,"skipped":329,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:37.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-942dbe7b-6055-46b5-a60a-678969002d03 STEP: Creating secret with name s-test-opt-upd-22a74bd2-3038-40cf-a492-73aa1f3d09c4 STEP: Creating the pod Oct 23 00:44:37.721: INFO: The status of Pod pod-secrets-9ac43951-9e00-4501-9aa2-2cc6ee77698e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:39.723: INFO: The status of Pod pod-secrets-9ac43951-9e00-4501-9aa2-2cc6ee77698e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:41.725: INFO: The status of Pod pod-secrets-9ac43951-9e00-4501-9aa2-2cc6ee77698e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:43.724: INFO: The status of Pod pod-secrets-9ac43951-9e00-4501-9aa2-2cc6ee77698e is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:44:45.725: INFO: The status of Pod pod-secrets-9ac43951-9e00-4501-9aa2-2cc6ee77698e is Running (Ready = true) STEP: Deleting secret s-test-opt-del-942dbe7b-6055-46b5-a60a-678969002d03 STEP: Updating secret s-test-opt-upd-22a74bd2-3038-40cf-a492-73aa1f3d09c4 STEP: Creating secret with name s-test-opt-create-5911655f-eee3-4952-9961-f56e1094e8ab STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:07.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4796" for this suite. 
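The secrets test above mounts three secret volumes: one whose secret it later deletes, one it updates in place, and one marked optional whose secret is only created after the pod is already Running (the s-test-opt-del/-upd/-create names in the log). The optional flag is what lets the pod start with the third secret still missing; a sketch of that volume:

    volumes:
    - name: opt-create-volume    # illustrative volume name
      secret:
        secretName: s-test-opt-create-5911655f-eee3-4952-9961-f56e1094e8ab
        optional: true           # pod may start even though this secret doesn't exist yet

Once the secret appears (and the others are deleted/updated), the kubelet's periodic volume sync rewrites the mounted files, which is what the "waiting to observe update in volume" step polls for.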
• [SLOW TEST:89.406 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":102,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:05.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-bbab4e29-c322-495f-aa2e-28921f10205c STEP: Creating a pod to test consume configMaps Oct 23 00:46:05.568: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7" in namespace "projected-3287" to be "Succeeded or Failed" Oct 23 00:46:05.571: INFO: Pod "pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.631518ms Oct 23 00:46:07.574: INFO: Pod "pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006001545s Oct 23 00:46:09.580: INFO: Pod "pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011549346s STEP: Saw pod success Oct 23 00:46:09.580: INFO: Pod "pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7" satisfied condition "Succeeded or Failed" Oct 23 00:46:09.582: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7 container agnhost-container: STEP: delete the pod Oct 23 00:46:09.596: INFO: Waiting for pod pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7 to disappear Oct 23 00:46:09.598: INFO: Pod pod-projected-configmaps-7bf3613a-9a8c-475a-84d7-baee87a283e7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:09.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3287" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":135,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:09.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:11.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7492" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":16,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:41.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:45:41.242: INFO: created pod Oct 23 00:45:41.242: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6869" to be "Succeeded or Failed" Oct 23 00:45:41.244: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206678ms Oct 23 00:45:43.247: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005397836s Oct 23 00:45:45.250: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008449304s Oct 23 00:45:47.254: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01234447s STEP: Saw pod success Oct 23 00:45:47.254: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Oct 23 00:46:17.256: INFO: polling logs Oct 23 00:46:17.262: INFO: Pod logs: 2021/10/23 00:45:44 OK: Got token 2021/10/23 00:45:44 validating with in-cluster discovery 2021/10/23 00:45:44 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/10/23 00:45:44 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6869:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634950541, NotBefore:1634949941, IssuedAt:1634949941, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6869", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"302aa40e-3a5e-4991-98e8-202d1d0c1cec"}}} 2021/10/23 00:45:44 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/10/23 00:45:44 OK: Validated signature on JWT 2021/10/23 00:45:44 OK: Got valid claims from token! 2021/10/23 00:45:44 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6869:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634950541, NotBefore:1634949941, IssuedAt:1634949941, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6869", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"302aa40e-3a5e-4991-98e8-202d1d0c1cec"}}} Oct 23 00:46:17.262: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:17.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6869" for this suite. 
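For context, ServiceAccountIssuerDiscovery means the API server publishes an OIDC discovery document for the issuer shown in the pod logs above (https://kubernetes.default.svc.cluster.local). A rough in-cluster sketch of the first step the oidc-discovery-validator performs: fetch {issuer}/.well-known/openid-configuration using the mounted service-account token. Plain net/http; the in-cluster hostname and the RBAC permission to read the discovery endpoint are assumptions, and this only runs inside a pod:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Inside a pod, the kubelet mounts the service-account token and the
// cluster CA under this directory.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	token, err := os.ReadFile(saDir + "/token")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(saDir + "/ca.crt")
	if err != nil {
		panic(err)
	}

	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	// Discovery lives at {issuer}/.well-known/openid-configuration; the JWKS
	// URI it returns is what lets the validator check the JWT signature.
	req, err := http.NewRequest(http.MethodGet,
		"https://kubernetes.default.svc/.well-known/openid-configuration", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+string(token))

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON with "issuer", "jwks_uri", etc.
}
```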
• [SLOW TEST:36.073 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":21,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:06.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-a95ee08e-1884-4704-ae22-b6c9ffca1822 in namespace container-probe-802 Oct 23 00:42:18.749: INFO: Started pod busybox-a95ee08e-1884-4704-ae22-b6c9ffca1822 in namespace container-probe-802 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 00:42:18.752: INFO: Initial restart count of pod busybox-a95ee08e-1884-4704-ae22-b6c9ffca1822 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:19.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-802" for this suite. 
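The probe spec above holds restartCount at 0 for roughly four minutes because the probed file exists for the container's whole lifetime. A minimal sketch of a pod in that shape; the image tag and timings are illustrative, and note the probe field is named ProbeHandler in recent k8s.io/api releases (v1.21, the version in this run, embedded it as Handler):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29", // illustrative tag
				// The health file exists for the container's whole lifetime,
				// so the probe never fails and restartCount stays 0.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"cat", "/tmp/health"},
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```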
• [SLOW TEST:252.589 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:05.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:46:06.191: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:46:08.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546766, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546766, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546766, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546766, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:46:11.223: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:46:11.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9302-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:19.352: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "webhook-3049" for this suite. STEP: Destroying namespace "webhook-3049-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.558 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":21,"skipped":333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:43:48.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-b2874ca7-bb90-444e-b7de-a25299f60134 in namespace container-probe-2422 Oct 23 00:43:58.581: INFO: Started pod liveness-b2874ca7-bb90-444e-b7de-a25299f60134 in namespace container-probe-2422 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 00:43:58.584: INFO: Initial restart count of pod liveness-b2874ca7-bb90-444e-b7de-a25299f60134 is 0 Oct 23 00:44:12.619: INFO: Restart count of pod container-probe-2422/liveness-b2874ca7-bb90-444e-b7de-a25299f60134 is now 1 (14.034928788s elapsed) Oct 23 00:44:32.657: INFO: Restart count of pod container-probe-2422/liveness-b2874ca7-bb90-444e-b7de-a25299f60134 is now 2 (34.072421421s elapsed) Oct 23 00:44:52.698: INFO: Restart count of pod container-probe-2422/liveness-b2874ca7-bb90-444e-b7de-a25299f60134 is now 3 (54.113016159s elapsed) Oct 23 00:45:10.729: INFO: Restart count of pod container-probe-2422/liveness-b2874ca7-bb90-444e-b7de-a25299f60134 is now 4 (1m12.144359046s elapsed) Oct 23 00:46:22.864: INFO: Restart count of pod container-probe-2422/liveness-b2874ca7-bb90-444e-b7de-a25299f60134 is now 5 (2m24.279360635s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:22.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2422" for this suite. 
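The steadily climbing restart counts in the spec above come from a liveness probe that is designed to start failing: the container creates the health file, removes it after a delay, and the kubelet then kills and restarts the container on each failed probe. A compact sketch of that failing variant, with the same illustrative image and timing caveats as the passing probe earlier:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-failing-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// The health file disappears after 10s, so every later probe
				// fails and the kubelet restarts the container repeatedly.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```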
• [SLOW TEST:154.341 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":68,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:17.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-89fd02b6-04f6-42e1-9e76-8b0e129dcd69 STEP: Creating a pod to test consume secrets Oct 23 00:46:17.360: INFO: Waiting up to 5m0s for pod "pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5" in namespace "secrets-9854" to be "Succeeded or Failed" Oct 23 00:46:17.362: INFO: Pod "pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125635ms Oct 23 00:46:19.368: INFO: Pod "pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007608803s Oct 23 00:46:21.374: INFO: Pod "pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013936351s Oct 23 00:46:23.378: INFO: Pod "pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018421976s STEP: Saw pod success Oct 23 00:46:23.378: INFO: Pod "pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5" satisfied condition "Succeeded or Failed" Oct 23 00:46:23.381: INFO: Trying to get logs from node node2 pod pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5 container secret-volume-test: STEP: delete the pod Oct 23 00:46:23.394: INFO: Waiting for pod pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5 to disappear Oct 23 00:46:23.396: INFO: Pod pod-secrets-fc92b198-d361-447e-aafb-d35f41579eb5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:23.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9854" for this suite. 
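The Secrets variant just completed is the same pattern with a SecretVolumeSource: one key remapped to a new path with an explicit item mode, verified by reading the file from the container log. A minimal sketch; the secret name, key, and mode are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // per-item file mode, as in the "Item Mode set" case

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map", // illustrative name
						Items: []corev1.KeyToPath{{
							Key: "data-1", Path: "new-path-data-1", Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "ls -l /etc/secret-volume; cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```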
• [SLOW TEST:6.079 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:19.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:19.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-6247 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-3436" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:25.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6247" for this suite. 
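The DisruptionController spec above exercises the collection verbs on PodDisruptionBudgets. A client-go sketch of the same flow: create a labelled PDB, list PDBs across all namespaces (an empty namespace string), then remove the whole labelled collection with a single DeleteCollection call. The kubeconfig path, namespace, and labels are assumptions:

```go
package main

import (
	"context"
	"fmt"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default"

	min := intstr.FromInt(1)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "demo-pdb-",
			Labels:       map[string]string{"demo": "pdb"},
		},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &min,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		},
	}
	if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// List across all namespaces, then delete the labelled collection in one call.
	all, err := cs.PolicyV1().PodDisruptionBudgets("").List(ctx, metav1.ListOptions{LabelSelector: "demo=pdb"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d PDBs\n", len(all.Items))

	if err := cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "demo=pdb"}); err != nil {
		panic(err)
	}
}
```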
• [SLOW TEST:6.106 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":22,"skipped":383,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:19.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:46:19.357: INFO: The status of Pod busybox-host-aliases7d41ddad-fc36-49b6-9fa2-574f1bf02355 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:21.360: INFO: The status of Pod busybox-host-aliases7d41ddad-fc36-49b6-9fa2-574f1bf02355 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:23.361: INFO: The status of Pod busybox-host-aliases7d41ddad-fc36-49b6-9fa2-574f1bf02355 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:25.360: INFO: The status of Pod busybox-host-aliases7d41ddad-fc36-49b6-9fa2-574f1bf02355 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:27.362: INFO: The status of Pod busybox-host-aliases7d41ddad-fc36-49b6-9fa2-574f1bf02355 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:28.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9445" for this suite. 
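hostAliases entries are rendered by the kubelet into the container's /etc/hosts, which is what the Kubelet spec above verifies. A minimal pod sketch; the IP and hostnames are invented for illustration:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			// The kubelet merges these entries into the container's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89", // illustrative address
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "cat /etc/hosts; sleep 600"},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```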
• [SLOW TEST:8.753 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:59.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-2443 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 00:45:59.043: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 00:45:59.082: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:01.085: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:03.087: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:05.086: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:07.085: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:09.086: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:11.085: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:13.087: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:15.086: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:17.086: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:46:19.086: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 00:46:19.090: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 23 00:46:21.094: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 00:46:29.119: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 00:46:29.119: INFO: Breadth first check of 10.244.3.40 on host 10.10.190.207... 
Oct 23 00:46:29.121: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.43:9080/dial?request=hostname&protocol=udp&host=10.244.3.40&port=8081&tries=1'] Namespace:pod-network-test-2443 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:46:29.121: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:46:29.583: INFO: Waiting for responses: map[] Oct 23 00:46:29.583: INFO: reached 10.244.3.40 after 0/1 tries Oct 23 00:46:29.583: INFO: Breadth first check of 10.244.4.34 on host 10.10.190.208... Oct 23 00:46:29.586: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.43:9080/dial?request=hostname&protocol=udp&host=10.244.4.34&port=8081&tries=1'] Namespace:pod-network-test-2443 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:46:29.586: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:46:29.737: INFO: Waiting for responses: map[] Oct 23 00:46:29.737: INFO: reached 10.244.4.34 after 0/1 tries Oct 23 00:46:29.737: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:29.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2443" for this suite. • [SLOW TEST:30.724 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":379,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:25.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 23 00:46:25.681: INFO: Waiting up to 5m0s for pod "pod-26c02d39-4004-4da3-bfa8-846eb394c076" in namespace "emptydir-3848" to be "Succeeded or Failed" Oct 23 00:46:25.683: INFO: Pod "pod-26c02d39-4004-4da3-bfa8-846eb394c076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077875ms Oct 23 00:46:27.687: INFO: Pod "pod-26c02d39-4004-4da3-bfa8-846eb394c076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006082448s Oct 23 00:46:29.690: INFO: Pod "pod-26c02d39-4004-4da3-bfa8-846eb394c076": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009091174s STEP: Saw pod success Oct 23 00:46:29.690: INFO: Pod "pod-26c02d39-4004-4da3-bfa8-846eb394c076" satisfied condition "Succeeded or Failed" Oct 23 00:46:29.692: INFO: Trying to get logs from node node2 pod pod-26c02d39-4004-4da3-bfa8-846eb394c076 container test-container: STEP: delete the pod Oct 23 00:46:29.765: INFO: Waiting for pod pod-26c02d39-4004-4da3-bfa8-846eb394c076 to disappear Oct 23 00:46:29.768: INFO: Pod pod-26c02d39-4004-4da3-bfa8-846eb394c076 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:29.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3848" for this suite. •S ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":355,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:23.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:46:23.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880" in namespace "downward-api-9602" to be "Succeeded or Failed" Oct 23 00:46:23.439: INFO: Pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880": Phase="Pending", Reason="", readiness=false. Elapsed: 1.891304ms Oct 23 00:46:25.442: INFO: Pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00546786s Oct 23 00:46:27.446: INFO: Pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009369518s Oct 23 00:46:29.450: INFO: Pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013075743s Oct 23 00:46:31.455: INFO: Pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.018705343s STEP: Saw pod success Oct 23 00:46:31.455: INFO: Pod "downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880" satisfied condition "Succeeded or Failed" Oct 23 00:46:31.458: INFO: Trying to get logs from node node1 pod downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880 container client-container: STEP: delete the pod Oct 23 00:46:31.470: INFO: Waiting for pod downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880 to disappear Oct 23 00:46:31.473: INFO: Pod downwardapi-volume-be55f6b5-bfdc-4024-9987-5db75d857880 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:31.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9602" for this suite. • [SLOW TEST:8.074 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":355,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:42:30.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-ebae2542-816d-4d58-855e-e1119ed2bc3d in namespace container-probe-2562 Oct 23 00:42:34.991: INFO: Started pod test-webserver-ebae2542-816d-4d58-855e-e1119ed2bc3d in namespace container-probe-2562 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 00:42:34.994: INFO: Initial restart count of pod test-webserver-ebae2542-816d-4d58-855e-e1119ed2bc3d is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:35.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2562" for this suite. 
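As with the exec case earlier, the HTTP liveness probe in the spec above is expected to keep succeeding, so restartCount stays 0 for the whole soak. A sketch in the same spirit; the test probes its own webserver image, while this sketch uses nginx and probes / as a stand-in path that always returns 200:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx:1.21", // stand-in for the e2e webserver image
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						// Probing a path that always returns 200, so the
						// container is never restarted over the soak period.
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    3,
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```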
• [SLOW TEST:244.636 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":139,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:31.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-fcef0aec-3c78-4e9c-87be-85a3287f0cec STEP: Creating secret with name s-test-opt-upd-e32fd305-07aa-41ff-b0df-885da24374c9 STEP: Creating the pod Oct 23 00:46:31.545: INFO: The status of Pod pod-projected-secrets-8f33f5a1-df7f-42a0-95d3-bd3083036aea is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:33.550: INFO: The status of Pod pod-projected-secrets-8f33f5a1-df7f-42a0-95d3-bd3083036aea is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:35.549: INFO: The status of Pod pod-projected-secrets-8f33f5a1-df7f-42a0-95d3-bd3083036aea is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:37.549: INFO: The status of Pod pod-projected-secrets-8f33f5a1-df7f-42a0-95d3-bd3083036aea is Running (Ready = true) STEP: Deleting secret s-test-opt-del-fcef0aec-3c78-4e9c-87be-85a3287f0cec STEP: Updating secret s-test-opt-upd-e32fd305-07aa-41ff-b0df-885da24374c9 STEP: Creating secret with name s-test-opt-create-5354e7b9-93ce-45c3-904f-fc3d12ec3d9a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:41.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6869" for this suite. 
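The "optional updates" spec above mounts projected secret sources marked optional, so the pod tolerates a source being deleted or not yet created, and the kubelet re-syncs the volume contents as the secrets change. A sketch of two such optional projected sources; the names are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	optional := true

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
								Optional:             &optional, // pod keeps running if this secret is deleted
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
								Optional:             &optional, // may not exist yet when the pod starts
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "creates-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected-secrets", ReadOnly: true,
				}},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```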
• [SLOW TEST:10.132 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":362,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:28.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 23 00:46:28.795: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 23 00:46:30.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546788, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546788, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546788, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546788, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:46:33.818: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:46:33.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:41.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3047" for this suite. 
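Conversion between CR v1 and v2 in the spec above is driven by a conversion stanza on the CRD that points at the webhook service deployed earlier in the log. A sketch of such a CRD object using k8s.io/apiextensions-apiserver types; the group, names, service coordinates, and the deliberately permissive schema are all illustrative:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	path := "/crdconvert"
	port := int32(443)
	preserve := true

	version := func(name string, storage bool) apiextensionsv1.CustomResourceDefinitionVersion {
		return apiextensionsv1.CustomResourceDefinitionVersion{
			Name:    name,
			Served:  true,
			Storage: storage, // exactly one version may be the storage version
			Schema: &apiextensionsv1.CustomResourceValidation{
				OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
					Type:                   "object",
					XPreserveUnknownFields: &preserve, // permissive schema, for brevity
				},
			},
		}
	}

	crd := apiextensionsv1.CustomResourceDefinition{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1", Kind: "CustomResourceDefinition"},
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.webhook.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "webhook.example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Scope:    apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{version("v1", true), version("v2", false)},
			Conversion: &apiextensionsv1.CustomResourceConversion{
				Strategy: apiextensionsv1.WebhookConverter,
				Webhook: &apiextensionsv1.WebhookConversion{
					ClientConfig: &apiextensionsv1.WebhookClientConfig{
						// Points at the conversion webhook service; CABundle omitted here.
						Service: &apiextensionsv1.ServiceReference{
							Namespace: "crd-webhook", Name: "e2e-test-crd-conversion-webhook",
							Path: &path, Port: &port,
						},
					},
					ConversionReviewVersions: []string{"v1"},
				},
			},
		},
	}

	out, err := yaml.Marshal(crd)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```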
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.743 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:29.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 23 00:46:30.426: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:46:30.438: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:46:32.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:46:34.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:46:37.458: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:46:37.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5946-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:45.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3222" for this suite. STEP: Destroying namespace "webhook-3222-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.743 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":24,"skipped":416,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:45.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 23 00:46:45.615: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5191 b2265905-d906-47f2-8d05-c35d5775729a 63575 0 2021-10-23 00:46:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:46:45.615: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5191 b2265905-d906-47f2-8d05-c35d5775729a 63576 0 2021-10-23 00:46:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:45 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:45.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5191" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":25,"skipped":420,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:45.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Oct 23 00:46:45.677: INFO: created test-podtemplate-1 Oct 23 00:46:45.680: INFO: created test-podtemplate-2 Oct 23 00:46:45.683: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Oct 23 00:46:45.685: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 23 00:46:45.696: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:45.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-6553" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":26,"skipped":431,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:41.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 23 00:46:41.683: INFO: Waiting up to 5m0s for pod "pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11" in namespace "emptydir-4956" to be "Succeeded or Failed" Oct 23 00:46:41.685: INFO: Pod "pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171226ms Oct 23 00:46:43.689: INFO: Pod "pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005583113s Oct 23 00:46:45.691: INFO: Pod "pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008077581s STEP: Saw pod success Oct 23 00:46:45.691: INFO: Pod "pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11" satisfied condition "Succeeded or Failed" Oct 23 00:46:45.693: INFO: Trying to get logs from node node2 pod pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11 container test-container: STEP: delete the pod Oct 23 00:46:45.733: INFO: Waiting for pod pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11 to disappear Oct 23 00:46:45.735: INFO: Pod pod-a8c522f7-c184-4ea6-b0e6-fdfd4fb32b11 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:45.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4956" for this suite. •S ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":4,"skipped":93,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:41.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-9d9115b4-e13d-4f2b-ba4d-f711cc0b09df STEP: Creating a pod to test consume configMaps Oct 23 00:46:41.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05" in namespace "projected-8927" to be "Succeeded or Failed" Oct 23 00:46:41.995: INFO: Pod "pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098742ms Oct 23 00:46:44.000: INFO: Pod "pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00658704s Oct 23 00:46:46.003: INFO: Pod "pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009834724s STEP: Saw pod success Oct 23 00:46:46.003: INFO: Pod "pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05" satisfied condition "Succeeded or Failed" Oct 23 00:46:46.005: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05 container agnhost-container: STEP: delete the pod Oct 23 00:46:46.126: INFO: Waiting for pod pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05 to disappear Oct 23 00:46:46.129: INFO: Pod pod-projected-configmaps-a701c9f6-dadd-4ed7-a9b6-7d5e4ac46c05 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:46.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8927" for this suite. 
• ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:35.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:46.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5836" for this suite. • [SLOW TEST:11.057 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":4,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:29.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:46:30.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:46:32.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546790, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:46:35.269: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:48.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6003" for this suite. STEP: Destroying namespace "webhook-6003-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.550 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":25,"skipped":417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:45.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:46:45.815: INFO: The status of Pod busybox-readonly-fs169aa734-7ca6-4ac2-8aca-3c0953135083 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:47.818: INFO: The status of Pod busybox-readonly-fs169aa734-7ca6-4ac2-8aca-3c0953135083 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:46:49.818: INFO: The status of Pod busybox-readonly-fs169aa734-7ca6-4ac2-8aca-3c0953135083 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:49.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7108" for this suite. 
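The read-only assertion above comes down to a single container-level securityContext field. A minimal sketch of a pod like the busybox-readonly-fs... pod in the log (name, image tag, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs          # mirrors the naming pattern in the log; illustrative
spec:
  containers:
  - name: busybox-readonly-fs
    image: busybox:1.29              # illustrative tag
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true   # any write to the root filesystem now fails
  restartPolicy: Never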
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":460,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:48.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 00:46:48.517: INFO: Waiting up to 5m0s for pod "security-context-8075765d-ddec-4bcf-9bac-f8239d61092e" in namespace "security-context-8422" to be "Succeeded or Failed" Oct 23 00:46:48.519: INFO: Pod "security-context-8075765d-ddec-4bcf-9bac-f8239d61092e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122389ms Oct 23 00:46:50.522: INFO: Pod "security-context-8075765d-ddec-4bcf-9bac-f8239d61092e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005458578s Oct 23 00:46:52.526: INFO: Pod "security-context-8075765d-ddec-4bcf-9bac-f8239d61092e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009654536s Oct 23 00:46:54.529: INFO: Pod "security-context-8075765d-ddec-4bcf-9bac-f8239d61092e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012035682s STEP: Saw pod success Oct 23 00:46:54.529: INFO: Pod "security-context-8075765d-ddec-4bcf-9bac-f8239d61092e" satisfied condition "Succeeded or Failed" Oct 23 00:46:54.531: INFO: Trying to get logs from node node1 pod security-context-8075765d-ddec-4bcf-9bac-f8239d61092e container test-container: STEP: delete the pod Oct 23 00:46:54.544: INFO: Waiting for pod security-context-8075765d-ddec-4bcf-9bac-f8239d61092e to disappear Oct 23 00:46:54.546: INFO: Pod security-context-8075765d-ddec-4bcf-9bac-f8239d61092e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:54.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8422" for this suite. 
• [SLOW TEST:6.072 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":93,"failed":0} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:46.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 23 00:46:46.177: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1774 8cd4b521-fd41-46f9-9957-78587cfc6d96 63622 0 2021-10-23 00:46:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:46:46.177: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1774 8cd4b521-fd41-46f9-9957-78587cfc6d96 63623 0 2021-10-23 00:46:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:46:46.177: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1774 8cd4b521-fd41-46f9-9957-78587cfc6d96 63624 0 2021-10-23 00:46:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 23 
00:46:56.197: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1774 8cd4b521-fd41-46f9-9957-78587cfc6d96 63940 0 2021-10-23 00:46:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:46:56.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1774 8cd4b521-fd41-46f9-9957-78587cfc6d96 63941 0 2021-10-23 00:46:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:46:56.198: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1774 8cd4b521-fd41-46f9-9957-78587cfc6d96 63942 0 2021-10-23 00:46:46 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-10-23 00:46:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:56.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1774" for this suite. • [SLOW TEST:10.067 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":6,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:56.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Oct 23 00:46:56.366: INFO: observed Pod pod-test in namespace pods-7219 in phase Pending with labels: map[test-pod-static:true] & conditions [] Oct 23 00:46:56.368: INFO: observed Pod pod-test in namespace pods-7219 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2021-10-23 00:46:56 +0000 UTC }] Oct 23 00:46:56.378: INFO: observed Pod pod-test in namespace pods-7219 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC }] Oct 23 00:46:57.887: INFO: observed Pod pod-test in namespace pods-7219 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC }] Oct 23 00:46:58.854: INFO: Found Pod pod-test in namespace pods-7219 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:46:56 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Oct 23 00:46:58.865: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Oct 23 00:46:58.883: INFO: observed event type ADDED Oct 23 00:46:58.883: INFO: observed event type MODIFIED Oct 23 00:46:58.883: INFO: observed event type MODIFIED Oct 23 00:46:58.884: INFO: observed event type MODIFIED Oct 23 00:46:58.884: INFO: observed event type MODIFIED Oct 23 00:46:58.884: INFO: observed event type MODIFIED Oct 23 00:46:58.884: INFO: observed event type MODIFIED Oct 23 00:46:58.884: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:46:58.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7219" for this suite. 
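The lifecycle walked through above starts from a pod carrying a static label; the later steps patch a new label in, replace the status Ready condition, and finally delete the pod as a collection by label selector (e.g. kubectl delete pods -l test-pod-static=true). A minimal sketch of the starting pod, with an illustrative image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-test
  labels:
    test-pod-static: "true"   # the label the watch and the collection delete select on
spec:
  containers:
  - name: pod-test
    image: nginx:1.21         # illustrative; the suite uses its own test image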
• ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":7,"skipped":143,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:54.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:46:55.231: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:46:57.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546815, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546815, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546815, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546815, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:47:00.252: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:00.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5017" for this suite. STEP: Destroying namespace "webhook-5017-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.663 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":27,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:45.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:46:46.224: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:46:48.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:46:50.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546806, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:46:53.246: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:46:53.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:01.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5183" for this suite. STEP: Destroying namespace "webhook-5183-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.608 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":26,"skipped":376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:46.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:02.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2085" for this suite. • [SLOW TEST:16.105 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":5,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:58.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:02.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6899" for this suite. 
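The pdb being created, updated, and patched above is a small object; a minimal sketch (the selector and threshold are illustrative, and policy/v1 is GA as of the v1.21 cluster in this run):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-example        # illustrative
spec:
  minAvailable: 2          # could equally be a percentage, or maxUnavailable instead
  selector:
    matchLabels:
      app: protected-app   # illustrative selector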
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":8,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:00.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 00:47:00.448: INFO: Waiting up to 5m0s for pod "downward-api-06253951-cda6-4530-a067-6c3f61fbb18d" in namespace "downward-api-8049" to be "Succeeded or Failed" Oct 23 00:47:00.451: INFO: Pod "downward-api-06253951-cda6-4530-a067-6c3f61fbb18d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705814ms Oct 23 00:47:02.454: INFO: Pod "downward-api-06253951-cda6-4530-a067-6c3f61fbb18d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006195195s Oct 23 00:47:04.459: INFO: Pod "downward-api-06253951-cda6-4530-a067-6c3f61fbb18d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010875824s Oct 23 00:47:06.463: INFO: Pod "downward-api-06253951-cda6-4530-a067-6c3f61fbb18d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014371793s STEP: Saw pod success Oct 23 00:47:06.463: INFO: Pod "downward-api-06253951-cda6-4530-a067-6c3f61fbb18d" satisfied condition "Succeeded or Failed" Oct 23 00:47:06.465: INFO: Trying to get logs from node node1 pod downward-api-06253951-cda6-4530-a067-6c3f61fbb18d container dapi-container: STEP: delete the pod Oct 23 00:47:06.488: INFO: Waiting for pod downward-api-06253951-cda6-4530-a067-6c3f61fbb18d to disappear Oct 23 00:47:06.490: INFO: Pod downward-api-06253951-cda6-4530-a067-6c3f61fbb18d no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:06.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8049" for this suite. 
• [SLOW TEST:6.103 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":549,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:03.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 00:47:03.098: INFO: The status of Pod pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:05.103: INFO: The status of Pod pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:07.104: INFO: The status of Pod pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:09.102: INFO: The status of Pod pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:11.104: INFO: The status of Pod pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:13.104: INFO: The status of Pod pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 23 00:47:13.617: INFO: Successfully updated pod "pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31" Oct 23 00:47:13.617: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31" in namespace "pods-1430" to be "terminated due to deadline exceeded" Oct 23 00:47:13.620: INFO: Pod "pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31": Phase="Running", Reason="", readiness=true. Elapsed: 2.250522ms Oct 23 00:47:15.624: INFO: Pod "pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.006505803s Oct 23 00:47:15.624: INFO: Pod "pod-update-activedeadlineseconds-94d0c4c3-75c0-4459-a176-7ffd26333c31" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:15.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1430" for this suite. • [SLOW TEST:12.572 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":183,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:15.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 00:47:18.718: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:18.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5204" for this suite. 
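With TerminationMessagePolicy FallbackToLogsOnError, a container that exits non-zero without writing to terminationMessagePath has the tail of its log promoted to the termination message, which is how "DONE" appears in the status checked above. A minimal sketch (image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo            # illustrative
spec:
  containers:
  - name: term-demo
    image: busybox:1.29                     # illustrative
    command: ["sh", "-c", "echo DONE; exit 1"]       # fails after writing DONE to its log
    terminationMessagePolicy: FallbackToLogsOnError  # log tail becomes the termination message
  restartPolicy: Never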
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":195,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:06.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:47:12.553: INFO: Deleting pod "var-expansion-da36598e-f105-4a87-97b6-5ee19bd631fc" in namespace "var-expansion-6048" Oct 23 00:47:12.560: INFO: Wait up to 5m0s for pod "var-expansion-da36598e-f105-4a87-97b6-5ee19bd631fc" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:24.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6048" for this suite. • [SLOW TEST:18.059 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":29,"skipped":555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:01.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Oct 23 00:47:01.466: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:03.471: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:05.470: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:07.471: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:09.470: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Oct 23 00:47:09.487: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:11.490: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:13.493: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:15.491: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 23 00:47:15.506: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 00:47:15.508: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 00:47:17.510: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 00:47:17.514: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 00:47:19.510: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 00:47:19.514: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 00:47:21.510: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 00:47:21.514: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 00:47:23.511: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 00:47:23.514: INFO: Pod pod-with-poststart-exec-hook still exists Oct 23 00:47:25.509: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 23 00:47:25.512: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:25.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6493" for this suite. 
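The pod-with-poststart-exec-hook above carries a lifecycle.postStart exec handler; the suite's handler calls back into the pod-handle-http-request pod it set up first, while this sketch substitutes a trivial hypothetical command:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.29          # illustrative
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # hypothetical stand-in for the suite's HTTP callback to the handler pod
          command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]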
• [SLOW TEST:24.090 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:25.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:47:25.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625" in namespace "downward-api-269" to be "Succeeded or Failed" Oct 23 00:47:25.680: INFO: Pod "downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625": Phase="Pending", Reason="", readiness=false. Elapsed: 1.973511ms Oct 23 00:47:27.684: INFO: Pod "downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005292036s Oct 23 00:47:29.688: INFO: Pod "downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009194691s STEP: Saw pod success Oct 23 00:47:29.688: INFO: Pod "downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625" satisfied condition "Succeeded or Failed" Oct 23 00:47:29.690: INFO: Trying to get logs from node node2 pod downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625 container client-container: STEP: delete the pod Oct 23 00:47:29.700: INFO: Waiting for pod downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625 to disappear Oct 23 00:47:29.702: INFO: Pod downwardapi-volume-01c3b0b6-299e-4659-a84f-a45bd6837625 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:29.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-269" for this suite. 
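The mode assertion above applies per projected file: each downwardAPI volume item can carry its own mode. A sketch under assumed values (the file path, 0400 mode, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative
spec:
  containers:
  - name: client-container
    image: busybox:1.29           # illustrative
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname             # illustrative file name
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                # per-item file mode, the value this kind of test asserts on
  restartPolicy: Never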
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":456,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:29.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:34.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7635" for this suite. • [SLOW TEST:5.006 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":29,"skipped":461,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:24.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:47:24.663: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 23 00:47:32.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5553 --namespace=crd-publish-openapi-5553 create -f -' Oct 23 00:47:33.156: INFO: stderr: "" Oct 23 00:47:33.156: INFO: stdout: "e2e-test-crd-publish-openapi-9962-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 23 00:47:33.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5553 --namespace=crd-publish-openapi-5553 delete e2e-test-crd-publish-openapi-9962-crds test-cr' Oct 23 00:47:33.329: INFO: stderr: "" Oct 23 00:47:33.329: INFO: stdout: "e2e-test-crd-publish-openapi-9962-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 23 00:47:33.329: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5553 --namespace=crd-publish-openapi-5553 apply -f -' Oct 23 00:47:33.633: INFO: stderr: "" Oct 23 00:47:33.633: INFO: stdout: "e2e-test-crd-publish-openapi-9962-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 23 00:47:33.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5553 --namespace=crd-publish-openapi-5553 delete e2e-test-crd-publish-openapi-9962-crds test-cr' Oct 23 00:47:33.781: INFO: stderr: "" Oct 23 00:47:33.781: INFO: stdout: "e2e-test-crd-publish-openapi-9962-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 23 00:47:33.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5553 explain e2e-test-crd-publish-openapi-9962-crds' Oct 23 00:47:34.129: INFO: stderr: "" Oct 23 00:47:34.129: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9962-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:37.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5553" for this suite. • [SLOW TEST:13.030 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":30,"skipped":582,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:18.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4329 STEP: creating service affinity-clusterip in namespace services-4329 STEP: creating replication controller affinity-clusterip in namespace services-4329 I1023 00:47:18.777872 30 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4329, replica count: 3 I1023 00:47:21.829184 30 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:47:24.829355 30 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I1023 00:47:27.830095 30 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:47:27.834: INFO: Creating new exec pod Oct 23 00:47:32.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4329 exec execpod-affinityprcsf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Oct 23 00:47:33.188: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Oct 23 00:47:33.189: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:47:33.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4329 exec execpod-affinityprcsf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.35.176 80' Oct 23 00:47:33.418: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.35.176 80\nConnection to 10.233.35.176 80 port [tcp/http] succeeded!\n" Oct 23 00:47:33.418: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:47:33.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4329 exec execpod-affinityprcsf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.35.176:80/ ; done' Oct 23 00:47:33.699: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.35.176:80/\n" Oct 23 00:47:33.699: INFO: stdout: "\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg\naffinity-clusterip-rs8rg" Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: 
Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Received response from host: affinity-clusterip-rs8rg Oct 23 00:47:33.699: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4329, will wait for the garbage collector to delete the pods Oct 23 00:47:33.764: INFO: Deleting ReplicationController affinity-clusterip took: 4.09777ms Oct 23 00:47:33.865: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.608822ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:43.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4329" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:25.140 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":196,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:43.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 00:47:43.941: INFO: The status of Pod labelsupdate59e2dc07-0c97-4888-acce-add839257a6f is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:45.945: INFO: The status of Pod labelsupdate59e2dc07-0c97-4888-acce-add839257a6f is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:47:47.945: INFO: The status of Pod labelsupdate59e2dc07-0c97-4888-acce-add839257a6f is Running (Ready = true) Oct 23 00:47:48.463: INFO: 
Successfully updated pod "labelsupdate59e2dc07-0c97-4888-acce-add839257a6f" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:50.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7517" for this suite. • [SLOW TEST:6.580 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":204,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:49.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create a new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1023 00:46:50.943387 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:47:52.959: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:52.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5289" for this suite.
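Note (illustrative aside, not part of the test output): the "should delete RS created by deployment when not orphaning" spec above deletes a Deployment with non-orphaning cascading deletion and then polls until the garbage collector has removed the owned ReplicaSet and Pods; the "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" STEPs are intermediate polls, not failures. A minimal sketch of the same behaviour with plain kubectl against a v1.21 cluster (names are illustrative):

# A Deployment owns a ReplicaSet, which owns the Pods.
kubectl create deployment gc-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --replicas=2
# Background cascading deletion (the default, i.e. not orphaning): the
# garbage collector removes dependents by following their ownerReferences.
kubectl delete deployment gc-demo --cascade=background
# Both listings should converge to empty within a few seconds.
kubectl get rs,pods -l app=gc-demo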
• [SLOW TEST:63.092 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":28,"skipped":470,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:34.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi-version CRD Oct 23 00:47:34.757: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:47:56.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4324" for this suite. • [SLOW TEST:22.272 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":30,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:57.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 23 00:47:57.104: INFO: Waiting up to 5m0s for pod "pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7" in namespace "emptydir-7061" to be "Succeeded or Failed" Oct 23 00:47:57.106: INFO: Pod "pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7": Phase="Pending", Reason="", readiness=false.
Elapsed: 1.962532ms Oct 23 00:47:59.109: INFO: Pod "pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005229476s Oct 23 00:48:01.112: INFO: Pod "pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008332856s STEP: Saw pod success Oct 23 00:48:01.112: INFO: Pod "pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7" satisfied condition "Succeeded or Failed" Oct 23 00:48:01.116: INFO: Trying to get logs from node node2 pod pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7 container test-container: STEP: delete the pod Oct 23 00:48:01.130: INFO: Waiting for pod pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7 to disappear Oct 23 00:48:01.133: INFO: Pod pod-47983d45-f3da-4d12-85e8-7cf1be4ef8a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:01.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7061" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":489,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:07.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W1023 00:46:07.138189 36 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exist by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:01.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1980" for this suite.
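Note (illustrative aside, not part of the test output): the CronJob spec above schedules every minute with a long-running job so that consecutive runs overlap, then asserts that at least two Jobs are active at once; that only holds when concurrencyPolicy is Allow, the default. A sketch using the batch/v1 API recommended by the deprecation warning above (image and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow        # default: overlapping Jobs are permitted
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: sleeper
            image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
            command: ["sleep", "300"]   # outlives the 1-minute schedule, so runs overlap
EOF
# After two schedule ticks, more than one active Job should be listed:
kubectl get jobs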
• [SLOW TEST:114.046 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":11,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:37.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-lq89 STEP: Creating a pod to test atomic-volume-subpath Oct 23 00:47:37.753: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lq89" in namespace "subpath-5665" to be "Succeeded or Failed" Oct 23 00:47:37.757: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Pending", Reason="", readiness=false. Elapsed: 3.277179ms Oct 23 00:47:39.760: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006676562s Oct 23 00:47:41.764: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 4.010952657s Oct 23 00:47:43.768: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 6.014627782s Oct 23 00:47:45.771: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 8.017497532s Oct 23 00:47:47.775: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 10.021801409s Oct 23 00:47:49.778: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 12.025023749s Oct 23 00:47:51.782: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 14.029023778s Oct 23 00:47:53.787: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 16.033393935s Oct 23 00:47:55.790: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 18.036243808s Oct 23 00:47:57.794: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 20.040690574s Oct 23 00:47:59.797: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Running", Reason="", readiness=true. Elapsed: 22.04419035s Oct 23 00:48:01.803: INFO: Pod "pod-subpath-test-configmap-lq89": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.05015551s STEP: Saw pod success Oct 23 00:48:01.803: INFO: Pod "pod-subpath-test-configmap-lq89" satisfied condition "Succeeded or Failed" Oct 23 00:48:01.806: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-lq89 container test-container-subpath-configmap-lq89: STEP: delete the pod Oct 23 00:48:02.026: INFO: Waiting for pod pod-subpath-test-configmap-lq89 to disappear Oct 23 00:48:02.028: INFO: Pod pod-subpath-test-configmap-lq89 no longer exists STEP: Deleting pod pod-subpath-test-configmap-lq89 Oct 23 00:48:02.028: INFO: Deleting pod "pod-subpath-test-configmap-lq89" in namespace "subpath-5665" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:02.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5665" for this suite. • [SLOW TEST:24.332 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:50.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod Oct 23 00:47:50.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 23 00:47:50.678: INFO: stderr: "" Oct 23 00:47:50.678: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. Oct 23 00:47:50.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 23 00:47:50.678: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-845" to be "running and ready, or succeeded" Oct 23 00:47:50.681: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.274174ms Oct 23 00:47:52.687: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.008628517s Oct 23 00:47:54.693: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.015136602s Oct 23 00:47:54.693: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 23 00:47:54.693: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Oct 23 00:47:54.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 logs logs-generator logs-generator' Oct 23 00:47:54.862: INFO: stderr: "" Oct 23 00:47:54.863: INFO: stdout: "I1023 00:47:52.906983 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/694 341\nI1023 00:47:53.107982 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/m7k 364\nI1023 00:47:53.307388 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/jfp 490\nI1023 00:47:53.507727 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/2bw 455\nI1023 00:47:53.707009 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/hsf2 276\nI1023 00:47:53.907354 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/qj6 379\nI1023 00:47:54.107718 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/25g4 309\nI1023 00:47:54.307158 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/jqv6 456\nI1023 00:47:54.507759 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vt97 564\nI1023 00:47:54.707162 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/bx5x 484\n" STEP: limiting log lines Oct 23 00:47:54.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 logs logs-generator logs-generator --tail=1' Oct 23 00:47:55.008: INFO: stderr: "" Oct 23 00:47:55.008: INFO: stdout: "I1023 00:47:54.907607 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/fccr 291\n" Oct 23 00:47:55.008: INFO: got output "I1023 00:47:54.907607 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/fccr 291\n" STEP: limiting log bytes Oct 23 00:47:55.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 logs logs-generator logs-generator --limit-bytes=1' Oct 23 00:47:55.157: INFO: stderr: "" Oct 23 00:47:55.158: INFO: stdout: "I" Oct 23 00:47:55.158: INFO: got output "I" STEP: exposing timestamps Oct 23 00:47:55.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 logs logs-generator logs-generator --tail=1 --timestamps' Oct 23 00:47:55.308: INFO: stderr: "" Oct 23 00:47:55.308: INFO: stdout: "2021-10-23T00:47:55.107190984Z I1023 00:47:55.107027 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/dfq 411\n" Oct 23 00:47:55.308: INFO: got output "2021-10-23T00:47:55.107190984Z I1023 00:47:55.107027 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/dfq 411\n" STEP: restricting to a time range Oct 23 00:47:57.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 logs logs-generator logs-generator --since=1s' Oct 23 00:47:57.990: INFO: stderr: "" Oct 23 00:47:57.990: INFO: stdout: "I1023 00:47:57.107283 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/vpdx 345\nI1023 00:47:57.307750 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/kgb 221\nI1023 00:47:57.507128 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/hzlg 450\nI1023 00:47:57.707646 1 logs_generator.go:76]
24 POST /api/v1/namespaces/ns/pods/2w9 244\nI1023 00:47:57.907678 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/2qjn 331\n" Oct 23 00:47:57.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 logs logs-generator logs-generator --since=24h' Oct 23 00:47:58.153: INFO: stderr: "" Oct 23 00:47:58.153: INFO: stdout: "I1023 00:47:52.906983 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/694 341\nI1023 00:47:53.107982 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/m7k 364\nI1023 00:47:53.307388 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/jfp 490\nI1023 00:47:53.507727 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/2bw 455\nI1023 00:47:53.707009 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/hsf2 276\nI1023 00:47:53.907354 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/qj6 379\nI1023 00:47:54.107718 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/25g4 309\nI1023 00:47:54.307158 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/jqv6 456\nI1023 00:47:54.507759 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vt97 564\nI1023 00:47:54.707162 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/bx5x 484\nI1023 00:47:54.907607 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/fccr 291\nI1023 00:47:55.107027 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/dfq 411\nI1023 00:47:55.307412 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/q42 389\nI1023 00:47:55.507906 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/bfms 395\nI1023 00:47:55.707072 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/684s 239\nI1023 00:47:55.907553 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/pzf8 271\nI1023 00:47:56.107924 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/l7d 431\nI1023 00:47:56.307304 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/875 269\nI1023 00:47:56.507779 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/lwj 507\nI1023 00:47:56.707249 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/nbrw 350\nI1023 00:47:56.907649 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/9hd 209\nI1023 00:47:57.107283 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/vpdx 345\nI1023 00:47:57.307750 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/kgb 221\nI1023 00:47:57.507128 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/hzlg 450\nI1023 00:47:57.707646 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/2w9 244\nI1023 00:47:57.907678 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/2qjn 331\nI1023 00:47:58.108021 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/prbg 275\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Oct 23 00:47:58.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-845 delete pod logs-generator' Oct 23 00:48:03.847: INFO: stderr: "" Oct 23 00:48:03.847: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:03.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubectl-845" for this suite. • [SLOW TEST:13.359 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":13,"skipped":208,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:01.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 23 00:48:01.240: INFO: Waiting up to 5m0s for pod "pod-03873194-07d0-4437-969b-9456122f3b46" in namespace "emptydir-9271" to be "Succeeded or Failed" Oct 23 00:48:01.243: INFO: Pod "pod-03873194-07d0-4437-969b-9456122f3b46": Phase="Pending", Reason="", readiness=false. Elapsed: 3.043889ms Oct 23 00:48:03.246: INFO: Pod "pod-03873194-07d0-4437-969b-9456122f3b46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006678743s Oct 23 00:48:05.251: INFO: Pod "pod-03873194-07d0-4437-969b-9456122f3b46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011798743s Oct 23 00:48:07.256: INFO: Pod "pod-03873194-07d0-4437-969b-9456122f3b46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015998134s STEP: Saw pod success Oct 23 00:48:07.256: INFO: Pod "pod-03873194-07d0-4437-969b-9456122f3b46" satisfied condition "Succeeded or Failed" Oct 23 00:48:07.258: INFO: Trying to get logs from node node2 pod pod-03873194-07d0-4437-969b-9456122f3b46 container test-container: STEP: delete the pod Oct 23 00:48:07.270: INFO: Waiting for pod pod-03873194-07d0-4437-969b-9456122f3b46 to disappear Oct 23 00:48:07.272: INFO: Pod pod-03873194-07d0-4437-969b-9456122f3b46 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:07.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9271" for this suite. 
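Note (illustrative aside, not part of the test output): the EmptyDir specs above ((non-root,0777,default) and (root,0666,default)) run a short-lived pod that creates a file on a default-medium emptyDir with the requested mode bits and verifies the result from inside the container. A hand-rolled equivalent of the 0666 case (image and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}          # "default" medium: node-local disk rather than tmpfs
EOF
# Once the pod reaches Succeeded, the mode bits are in its log (expect -rw-rw-rw-):
kubectl logs emptydir-mode-demo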
• [SLOW TEST:6.081 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:01.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Oct 23 00:48:01.213: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:03.216: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:05.216: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:07.216: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:08.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1243" for this suite. 
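Note (illustrative aside, not part of the test output): the ReplicationController spec above demonstrates adoption: a bare Pod whose labels match a newly created controller's selector is taken over in place (it gains an ownerReference) instead of a replacement being started. A sketch with kubectl (names are illustrative):

# Create an unmanaged pod carrying the label the controller will select on.
kubectl run pod-adoption --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=name=pod-adoption
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption            # matches the pre-existing pod
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
# The orphan is adopted rather than replaced: it now names the RC as its owner,
# and the controller starts no second pod.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'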
• [SLOW TEST:7.065 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":32,"skipped":504,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:08.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:48:08.292: INFO: Creating deployment "test-recreate-deployment" Oct 23 00:48:08.295: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 23 00:48:08.299: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 23 00:48:10.304: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 23 00:48:10.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546888, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546888, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546888, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:48:12.310: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 23 00:48:12.317: INFO: Updating deployment test-recreate-deployment Oct 23 00:48:12.317: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 00:48:12.356: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4832 ef651bb6-939c-4923-8788-e9be4e62c4f5 65493 2 2021-10-23 00:48:08 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-10-23 00:48:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:48:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052dcb38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-10-23 00:48:12 +0000 UTC,LastTransitionTime:2021-10-23 00:48:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-10-23 00:48:12 +0000 UTC,LastTransitionTime:2021-10-23 00:48:08 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 23 00:48:12.359: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-4832 8d1f7ec5-c7b5-46f7-be15-d6c8dd697da9 65492 1 2021-10-23 00:48:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ef651bb6-939c-4923-8788-e9be4e62c4f5 0xc0052dd0c0 0xc0052dd0c1}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:48:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef651bb6-939c-4923-8788-e9be4e62c4f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052dd148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:48:12.360: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 23 00:48:12.360: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-4832 e302b334-ace0-41e9-89dd-876178294f83 65481 2 2021-10-23 00:48:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ef651bb6-939c-4923-8788-e9be4e62c4f5 0xc0052dcf87 0xc0052dcf88}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:48:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef651bb6-939c-4923-8788-e9be4e62c4f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0052dd028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:48:12.363: INFO: Pod "test-recreate-deployment-85d47dcb4-glml4" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-glml4 test-recreate-deployment-85d47dcb4- deployment-4832 e2cb4982-cf11-42f9-96b7-19b7584b8cd7 65494 0 2021-10-23 00:48:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 8d1f7ec5-c7b5-46f7-be15-d6c8dd697da9 0xc0052dd5ef 0xc0052dd600}] [] [{kube-controller-manager Update v1 2021-10-23 00:48:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d1f7ec5-c7b5-46f7-be15-d6c8dd697da9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-23 00:48:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wb724,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wb724,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-10-23 00:48:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:12.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4832" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":33,"skipped":513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:02.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1023 00:47:13.013301 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:48:15.031: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Oct 23 00:48:15.031: INFO: Deleting pod "simpletest-rc-to-be-deleted-2vkln" in namespace "gc-9794" Oct 23 00:48:15.039: INFO: Deleting pod "simpletest-rc-to-be-deleted-4prdg" in namespace "gc-9794" Oct 23 00:48:15.046: INFO: Deleting pod "simpletest-rc-to-be-deleted-7b74g" in namespace "gc-9794" Oct 23 00:48:15.051: INFO: Deleting pod "simpletest-rc-to-be-deleted-9sjrx" in namespace "gc-9794" Oct 23 00:48:15.057: INFO: Deleting pod "simpletest-rc-to-be-deleted-dpj66" in namespace "gc-9794" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:15.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9794" for this suite. • [SLOW TEST:72.167 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":6,"skipped":206,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:03.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:48:04.269: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Oct 23 00:48:06.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546884, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546884, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546884, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546884, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:48:09.287: INFO: Waiting for amount of service:e2e-test-webhook endpoints 
to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:19.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2085" for this suite. STEP: Destroying namespace "webhook-2085-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.546 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":14,"skipped":209,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:15.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-13e7003e-26e4-4955-9dd6-0bc5205f5589 STEP: Creating a pod to test consume secrets Oct 23 00:48:15.120: INFO: Waiting up to 5m0s for pod "pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621" in namespace "secrets-3271" to be "Succeeded or Failed" Oct 23 00:48:15.123: INFO: Pod "pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621": Phase="Pending", Reason="", readiness=false. Elapsed: 3.458423ms Oct 23 00:48:17.127: INFO: Pod "pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007059639s Oct 23 00:48:19.131: INFO: Pod "pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010914299s Oct 23 00:48:21.134: INFO: Pod "pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.014385542s STEP: Saw pod success Oct 23 00:48:21.134: INFO: Pod "pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621" satisfied condition "Succeeded or Failed" Oct 23 00:48:21.137: INFO: Trying to get logs from node node2 pod pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621 container secret-env-test: STEP: delete the pod Oct 23 00:48:21.148: INFO: Waiting for pod pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621 to disappear Oct 23 00:48:21.151: INFO: Pod pod-secrets-2092cc46-da55-4f9d-9917-b7ddbd226621 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:21.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3271" for this suite. • [SLOW TEST:6.076 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":211,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:12.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:25.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9896" for this suite. • [SLOW TEST:13.095 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":34,"skipped":549,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:02.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Oct 23 00:48:02.121: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:26.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8426" for this suite. • [SLOW TEST:23.922 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":32,"skipped":625,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:19.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 23 00:48:19.448: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 23 00:48:19.452: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 23 00:48:19.452: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 23 00:48:19.464: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 23 00:48:19.464: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 23 00:48:19.476: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 23 00:48:19.476: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 23 00:48:26.527: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:26.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-6476" for this suite. 
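For reference, the Verifying lines above spell out the defaults this spec expects (requests: cpu 100m, memory 200Mi, ephemeral-storage 200Gi; limits: cpu 500m, memory 500Mi, ephemeral-storage 500Gi; 209715200 bytes is exactly 200Mi and 214748364800 bytes is exactly 200Gi). A minimal client-go sketch of a LimitRange carrying those defaults follows; the object and function names are illustrative, not the test's own, while the quantities are read directly from the log.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // defaultsLimitRange builds a LimitRange whose container defaults match
    // the values the spec verifies above. Pods created in the same namespace
    // without resource requirements pick these up automatically.
    func defaultsLimitRange() *corev1.LimitRange {
        return &corev1.LimitRange{
            ObjectMeta: metav1.ObjectMeta{Name: "limitrange-defaults"},
            Spec: corev1.LimitRangeSpec{
                Limits: []corev1.LimitRangeItem{{
                    Type: corev1.LimitTypeContainer,
                    // DefaultRequest is applied when a pod omits requests.
                    DefaultRequest: corev1.ResourceList{
                        corev1.ResourceCPU:              resource.MustParse("100m"),
                        corev1.ResourceMemory:           resource.MustParse("200Mi"),
                        corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
                    },
                    // Default is applied when a pod omits limits.
                    Default: corev1.ResourceList{
                        corev1.ResourceCPU:              resource.MustParse("500m"),
                        corev1.ResourceMemory:           resource.MustParse("500Mi"),
                        corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
                    },
                }},
            },
        }
    }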
• [SLOW TEST:7.124 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":15,"skipped":216,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:21.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:48:21.203: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8" in namespace "downward-api-2137" to be "Succeeded or Failed" Oct 23 00:48:21.206: INFO: Pod "downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744474ms Oct 23 00:48:23.208: INFO: Pod "downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005062071s Oct 23 00:48:25.213: INFO: Pod "downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009993603s Oct 23 00:48:27.218: INFO: Pod "downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015625585s STEP: Saw pod success Oct 23 00:48:27.219: INFO: Pod "downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8" satisfied condition "Succeeded or Failed" Oct 23 00:48:27.221: INFO: Trying to get logs from node node2 pod downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8 container client-container: STEP: delete the pod Oct 23 00:48:27.234: INFO: Waiting for pod downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8 to disappear Oct 23 00:48:27.236: INFO: Pod downwardapi-volume-83e6cd53-f304-4934-8d6e-f946c28b41e8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:27.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2137" for this suite. 
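The downward API spec above exercises the resourceFieldRef path of the downward API volume: the container's own memory limit is rendered into a file, and the test reads it back from the container's logs. A minimal sketch of the pod shape involved follows; the image, mount path, and the 64Mi limit are illustrative placeholders, not values taken from the test's source.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // memoryLimitPod exposes the container's memory limit as a file in a
    // downward API volume; printing that file is what the spec checks.
    func memoryLimitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // illustrative image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"), // arbitrary
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "podinfo", MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                // resourceFieldRef resolves limits.memory for
                                // the named container at pod admission time.
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }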
• [SLOW TEST:6.077 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":214,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:25.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:48:25.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40" in namespace "downward-api-6510" to be "Succeeded or Failed" Oct 23 00:48:25.613: INFO: Pod "downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2347ms Oct 23 00:48:27.617: INFO: Pod "downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007276825s Oct 23 00:48:29.622: INFO: Pod "downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011717355s Oct 23 00:48:31.628: INFO: Pod "downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017650748s STEP: Saw pod success Oct 23 00:48:31.628: INFO: Pod "downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40" satisfied condition "Succeeded or Failed" Oct 23 00:48:31.630: INFO: Trying to get logs from node node2 pod downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40 container client-container: STEP: delete the pod Oct 23 00:48:31.737: INFO: Waiting for pod downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40 to disappear Oct 23 00:48:31.743: INFO: Pod downwardapi-volume-c1623da5-468f-4888-8670-0b5fb8136a40 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:31.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6510" for this suite. 
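The variant that just finished ("node allocatable as default memory limit") differs in one respect: the container sets no memory limit, in which case the kubelet substitutes the node's allocatable memory into the downward API file. A sketch of such a container, assuming the same volume layout as the previous snippet:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // noLimitContainer omits Resources entirely; with no limits.memory set,
    // the downward API file for limits.memory reports the node's allocatable
    // memory instead of a container limit.
    func noLimitContainer() corev1.Container {
        return corev1.Container{
            Name:    "client-container",
            Image:   "busybox", // illustrative image
            Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
            // Resources deliberately omitted.
            VolumeMounts: []corev1.VolumeMount{{
                Name: "podinfo", MountPath: "/etc/podinfo",
            }},
        }
    }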
• [SLOW TEST:6.180 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:07.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2851 STEP: creating service affinity-clusterip-transition in namespace services-2851 STEP: creating replication controller affinity-clusterip-transition in namespace services-2851 I1023 00:48:07.426685 36 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2851, replica count: 3 I1023 00:48:10.477865 36 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:13.480456 36 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:48:13.484: INFO: Creating new exec pod Oct 23 00:48:20.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2851 exec execpod-affinity8vlbk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Oct 23 00:48:20.774: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Oct 23 00:48:20.774: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:48:20.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2851 exec execpod-affinity8vlbk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.34.196 80' Oct 23 00:48:21.061: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.34.196 80\nConnection to 10.233.34.196 80 port [tcp/http] succeeded!\n" Oct 23 00:48:21.061: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:48:21.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2851 exec execpod-affinity8vlbk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.34.196:80/ ; done' Oct 23 00:48:21.378: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n" Oct 23 00:48:21.378: INFO: stdout: "\naffinity-clusterip-transition-xzn8h\naffinity-clusterip-transition-xzn8h\naffinity-clusterip-transition-xzn8h\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-5pb75\naffinity-clusterip-transition-xzn8h\naffinity-clusterip-transition-xzn8h\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-5pb75\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-5pb75\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-5pb75\naffinity-clusterip-transition-5pb75" Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-xzn8h Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-xzn8h Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-xzn8h Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-5pb75 Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-xzn8h Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-xzn8h Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-5pb75 Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-5pb75 Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-5pb75 Oct 23 00:48:21.378: INFO: Received response from host: affinity-clusterip-transition-5pb75 Oct 23 00:48:21.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2851 exec execpod-affinity8vlbk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.34.196:80/ ; done' Oct 23 00:48:22.122: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.196:80/\n" Oct 23 00:48:22.122: INFO: stdout: "\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj\naffinity-clusterip-transition-mbtdj" Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Received response from host: affinity-clusterip-transition-mbtdj Oct 23 00:48:22.123: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2851, will wait for the garbage collector to delete the pods Oct 23 00:48:22.188: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.314327ms Oct 23 00:48:22.288: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.680825ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:33.897: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "services-2851" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:26.510 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":186,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:26.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:48:26.594: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23" in namespace "projected-2557" to be "Succeeded or Failed" Oct 23 00:48:26.597: INFO: Pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511577ms Oct 23 00:48:28.600: INFO: Pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005125137s Oct 23 00:48:30.604: INFO: Pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009414209s Oct 23 00:48:32.608: INFO: Pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013641758s Oct 23 00:48:34.611: INFO: Pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016930071s STEP: Saw pod success Oct 23 00:48:34.611: INFO: Pod "downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23" satisfied condition "Succeeded or Failed" Oct 23 00:48:34.614: INFO: Trying to get logs from node node2 pod downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23 container client-container: STEP: delete the pod Oct 23 00:48:34.680: INFO: Waiting for pod downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23 to disappear Oct 23 00:48:34.682: INFO: Pod downwardapi-volume-9d7c0e91-b209-4c35-ad6b-91101f3b9a23 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:34.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2557" for this suite. 
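Two specs close above. The services-2851 spec toggles Service.Spec.SessionAffinity between None and ClientIP, which is why the curl loop first spreads across all three affinity-clusterip-transition pods and then pins to a single one. The projected-2557 spec checks that DefaultMode on a projected volume is applied to the files the volume renders; a minimal sketch of such a volume follows, with an illustrative 0400 mode and file name, not the test's own values.

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // projectedPodInfoVolume renders one downward-API file through a
    // projected volume; DefaultMode applies to every file it creates.
    func projectedPodInfoVolume() corev1.Volume {
        mode := int32(0400)
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode, // the mode the spec asserts on
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "podname",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    APIVersion: "v1",
                                    FieldPath:  "metadata.name",
                                },
                            }},
                        },
                    }},
                },
            },
        }
    }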
• [SLOW TEST:8.127 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":220,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":562,"failed":0} [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:31.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:37.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9235" for this suite. 
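The sysctl spec above sets kernel.shm_rmid_forced through the pod-level security context and then reads the value back from the pod's logs. A minimal sketch of the relevant pod shape, with illustrative names and image; only the sysctl name is taken from the log.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // sysctlPod asks the kubelet to set kernel.shm_rmid_forced=1 inside the
    // pod's namespaces; the spec checks the value from the container output.
    func sysctlPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    Sysctls: []corev1.Sysctl{{
                        Name:  "kernel.shm_rmid_forced", // sysctl named in the log
                        Value: "1",
                    }},
                },
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // illustrative image
                    Command: []string{"sh", "-c", "sysctl kernel.shm_rmid_forced"},
                }},
            },
        }
    }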
• [SLOW TEST:6.056 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":36,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:33.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:48:33.959: INFO: The status of Pod pod-secrets-d69f78bd-8f83-4990-81ab-51edda168f30 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:35.962: INFO: The status of Pod pod-secrets-d69f78bd-8f83-4990-81ab-51edda168f30 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:37.964: INFO: The status of Pod pod-secrets-d69f78bd-8f83-4990-81ab-51edda168f30 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:37.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9875" for this suite. 
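The emptydir-wrapper spec creates a secret and a configmap (both cleaned up above) and mounts them in a single pod to verify that the wrapped volumes do not conflict. A sketch of a pod mounting both kinds of volume side by side; all names and paths are illustrative.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // wrapperVolumesPod mounts a secret volume and a configmap volume next to
    // each other; the spec checks the two coexist without conflicting.
    func wrapperVolumesPod(secretName, configMapName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "secret-test",
                    Image:   "busybox", // illustrative image
                    Command: []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
                        {Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
                    },
                }},
                Volumes: []corev1.Volume{
                    {Name: "secret-volume", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
                    {Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName}}}},
                },
            },
        }
    }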
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":14,"skipped":188,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:27.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:48:27.286: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3774 I1023 00:48:27.307020 35 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3774, replica count: 1 I1023 00:48:28.359109 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:29.359736 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:30.360331 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:31.362692 35 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:48:31.470: INFO: Created: latency-svc-vcnhf Oct 23 00:48:31.474: INFO: Got endpoints: latency-svc-vcnhf [10.821897ms] Oct 23 00:48:31.480: INFO: Created: latency-svc-gwr78 Oct 23 00:48:31.482: INFO: Got endpoints: latency-svc-gwr78 [7.804957ms] Oct 23 00:48:31.482: INFO: Created: latency-svc-x8fj4 Oct 23 00:48:31.485: INFO: Got endpoints: latency-svc-x8fj4 [9.542123ms] Oct 23 00:48:31.485: INFO: Created: latency-svc-44wxv Oct 23 00:48:31.487: INFO: Got endpoints: latency-svc-44wxv [13.09673ms] Oct 23 00:48:31.488: INFO: Created: latency-svc-9pc5d Oct 23 00:48:31.490: INFO: Got endpoints: latency-svc-9pc5d [15.378627ms] Oct 23 00:48:31.491: INFO: Created: latency-svc-kdvt8 Oct 23 00:48:31.493: INFO: Got endpoints: latency-svc-kdvt8 [18.587941ms] Oct 23 00:48:31.494: INFO: Created: latency-svc-gwdjj Oct 23 00:48:31.496: INFO: Got endpoints: latency-svc-gwdjj [21.50832ms] Oct 23 00:48:31.497: INFO: Created: latency-svc-rzmxj Oct 23 00:48:31.499: INFO: Got endpoints: latency-svc-rzmxj [24.136771ms] Oct 23 00:48:31.500: INFO: Created: latency-svc-t5gbk Oct 23 00:48:31.502: INFO: Got endpoints: latency-svc-t5gbk [27.196342ms] Oct 23 00:48:31.503: INFO: Created: latency-svc-8rg79 Oct 23 00:48:31.505: INFO: Got endpoints: latency-svc-8rg79 [30.267348ms] Oct 23 00:48:31.506: INFO: Created: latency-svc-wqqgl Oct 23 00:48:31.508: INFO: Created: latency-svc-spsxz Oct 23 00:48:31.508: INFO: Got endpoints: latency-svc-wqqgl [33.271514ms] Oct 23 00:48:31.510: INFO: Got endpoints: latency-svc-spsxz [35.339299ms] Oct 23 00:48:31.511: INFO: Created: latency-svc-kx82s Oct 23 00:48:31.513: INFO: Got endpoints: latency-svc-kx82s [38.352353ms] Oct 23 00:48:31.514: INFO: Created: latency-svc-qrcsq Oct 23 00:48:31.516: INFO: Got endpoints: latency-svc-qrcsq [40.952788ms] Oct 23 
00:48:31.516: INFO: Created: latency-svc-5wqf7 Oct 23 00:48:31.519: INFO: Got endpoints: latency-svc-5wqf7 [43.880113ms] Oct 23 00:48:31.519: INFO: Created: latency-svc-cjhwd Oct 23 00:48:31.521: INFO: Got endpoints: latency-svc-cjhwd [46.293072ms] Oct 23 00:48:31.522: INFO: Created: latency-svc-v44wx Oct 23 00:48:31.523: INFO: Got endpoints: latency-svc-v44wx [41.264093ms] Oct 23 00:48:31.524: INFO: Created: latency-svc-w44ph Oct 23 00:48:31.526: INFO: Got endpoints: latency-svc-w44ph [7.656539ms] Oct 23 00:48:31.527: INFO: Created: latency-svc-kksrg Oct 23 00:48:31.529: INFO: Created: latency-svc-hsf5w Oct 23 00:48:31.530: INFO: Got endpoints: latency-svc-kksrg [45.569637ms] Oct 23 00:48:31.532: INFO: Got endpoints: latency-svc-hsf5w [44.025574ms] Oct 23 00:48:31.533: INFO: Created: latency-svc-wb8zf Oct 23 00:48:31.536: INFO: Got endpoints: latency-svc-wb8zf [46.217904ms] Oct 23 00:48:31.537: INFO: Created: latency-svc-jt5lc Oct 23 00:48:31.539: INFO: Got endpoints: latency-svc-jt5lc [45.437348ms] Oct 23 00:48:31.540: INFO: Created: latency-svc-lrnvk Oct 23 00:48:31.542: INFO: Got endpoints: latency-svc-lrnvk [45.047522ms] Oct 23 00:48:31.542: INFO: Created: latency-svc-r5cfr Oct 23 00:48:31.545: INFO: Got endpoints: latency-svc-r5cfr [46.02106ms] Oct 23 00:48:31.545: INFO: Created: latency-svc-bbz5d Oct 23 00:48:31.547: INFO: Got endpoints: latency-svc-bbz5d [45.364569ms] Oct 23 00:48:31.547: INFO: Created: latency-svc-m84x2 Oct 23 00:48:31.550: INFO: Got endpoints: latency-svc-m84x2 [44.446303ms] Oct 23 00:48:31.550: INFO: Created: latency-svc-tfh9d Oct 23 00:48:31.553: INFO: Got endpoints: latency-svc-tfh9d [44.394488ms] Oct 23 00:48:31.553: INFO: Created: latency-svc-t9ngb Oct 23 00:48:31.555: INFO: Got endpoints: latency-svc-t9ngb [44.815398ms] Oct 23 00:48:31.557: INFO: Created: latency-svc-hmrp4 Oct 23 00:48:31.558: INFO: Got endpoints: latency-svc-hmrp4 [45.30122ms] Oct 23 00:48:31.559: INFO: Created: latency-svc-nqtdd Oct 23 00:48:31.561: INFO: Got endpoints: latency-svc-nqtdd [44.955142ms] Oct 23 00:48:31.562: INFO: Created: latency-svc-h7w47 Oct 23 00:48:31.564: INFO: Got endpoints: latency-svc-h7w47 [42.823744ms] Oct 23 00:48:31.564: INFO: Created: latency-svc-xd8vg Oct 23 00:48:31.567: INFO: Created: latency-svc-9t9gl Oct 23 00:48:31.569: INFO: Created: latency-svc-g6gvv Oct 23 00:48:31.572: INFO: Got endpoints: latency-svc-xd8vg [48.941561ms] Oct 23 00:48:31.573: INFO: Created: latency-svc-6sjx2 Oct 23 00:48:31.576: INFO: Created: latency-svc-k7f46 Oct 23 00:48:31.578: INFO: Created: latency-svc-6gcrb Oct 23 00:48:31.581: INFO: Created: latency-svc-wqwlg Oct 23 00:48:31.584: INFO: Created: latency-svc-mvmx7 Oct 23 00:48:31.586: INFO: Created: latency-svc-zghqp Oct 23 00:48:31.589: INFO: Created: latency-svc-rl6lx Oct 23 00:48:31.591: INFO: Created: latency-svc-gf8zw Oct 23 00:48:31.594: INFO: Created: latency-svc-dzcvq Oct 23 00:48:31.597: INFO: Created: latency-svc-4xtd8 Oct 23 00:48:31.599: INFO: Created: latency-svc-sncl8 Oct 23 00:48:31.601: INFO: Created: latency-svc-x672k Oct 23 00:48:31.605: INFO: Created: latency-svc-v6sks Oct 23 00:48:31.623: INFO: Got endpoints: latency-svc-9t9gl [96.326682ms] Oct 23 00:48:31.628: INFO: Created: latency-svc-mstx2 Oct 23 00:48:31.672: INFO: Got endpoints: latency-svc-g6gvv [141.966169ms] Oct 23 00:48:31.677: INFO: Created: latency-svc-4zffg Oct 23 00:48:31.725: INFO: Got endpoints: latency-svc-6sjx2 [193.771892ms] Oct 23 00:48:31.742: INFO: Created: latency-svc-mwdqf Oct 23 00:48:31.773: INFO: Got endpoints: latency-svc-k7f46 
[236.274039ms] Oct 23 00:48:31.778: INFO: Created: latency-svc-w4vbp Oct 23 00:48:31.824: INFO: Got endpoints: latency-svc-6gcrb [284.899469ms] Oct 23 00:48:31.829: INFO: Created: latency-svc-7t9f2 Oct 23 00:48:31.873: INFO: Got endpoints: latency-svc-wqwlg [331.351412ms] Oct 23 00:48:31.878: INFO: Created: latency-svc-jtn6h Oct 23 00:48:31.923: INFO: Got endpoints: latency-svc-mvmx7 [377.889722ms] Oct 23 00:48:31.928: INFO: Created: latency-svc-426nf Oct 23 00:48:31.974: INFO: Got endpoints: latency-svc-zghqp [426.507346ms] Oct 23 00:48:31.980: INFO: Created: latency-svc-27lc7 Oct 23 00:48:32.024: INFO: Got endpoints: latency-svc-rl6lx [474.329726ms] Oct 23 00:48:32.030: INFO: Created: latency-svc-tzv47 Oct 23 00:48:32.073: INFO: Got endpoints: latency-svc-gf8zw [520.565969ms] Oct 23 00:48:32.080: INFO: Created: latency-svc-zllqk Oct 23 00:48:32.124: INFO: Got endpoints: latency-svc-dzcvq [568.531514ms] Oct 23 00:48:32.129: INFO: Created: latency-svc-wt29g Oct 23 00:48:32.173: INFO: Got endpoints: latency-svc-4xtd8 [614.670698ms] Oct 23 00:48:32.179: INFO: Created: latency-svc-jl48s Oct 23 00:48:32.224: INFO: Got endpoints: latency-svc-sncl8 [662.83439ms] Oct 23 00:48:32.229: INFO: Created: latency-svc-khq55 Oct 23 00:48:32.274: INFO: Got endpoints: latency-svc-x672k [709.547765ms] Oct 23 00:48:32.280: INFO: Created: latency-svc-zprfq Oct 23 00:48:32.323: INFO: Got endpoints: latency-svc-v6sks [750.317586ms] Oct 23 00:48:32.329: INFO: Created: latency-svc-l5hvl Oct 23 00:48:32.373: INFO: Got endpoints: latency-svc-mstx2 [750.005334ms] Oct 23 00:48:32.378: INFO: Created: latency-svc-788xv Oct 23 00:48:32.424: INFO: Got endpoints: latency-svc-4zffg [751.264747ms] Oct 23 00:48:32.429: INFO: Created: latency-svc-cvzqb Oct 23 00:48:32.474: INFO: Got endpoints: latency-svc-mwdqf [748.160831ms] Oct 23 00:48:32.481: INFO: Created: latency-svc-w6l46 Oct 23 00:48:32.523: INFO: Got endpoints: latency-svc-w4vbp [750.12494ms] Oct 23 00:48:32.529: INFO: Created: latency-svc-w7rxh Oct 23 00:48:32.575: INFO: Got endpoints: latency-svc-7t9f2 [751.319205ms] Oct 23 00:48:32.582: INFO: Created: latency-svc-czrjg Oct 23 00:48:32.624: INFO: Got endpoints: latency-svc-jtn6h [750.551295ms] Oct 23 00:48:32.630: INFO: Created: latency-svc-rnkbh Oct 23 00:48:32.673: INFO: Got endpoints: latency-svc-426nf [749.991797ms] Oct 23 00:48:32.678: INFO: Created: latency-svc-cszz8 Oct 23 00:48:32.723: INFO: Got endpoints: latency-svc-27lc7 [749.053942ms] Oct 23 00:48:32.728: INFO: Created: latency-svc-2qshk Oct 23 00:48:32.773: INFO: Got endpoints: latency-svc-tzv47 [749.333516ms] Oct 23 00:48:32.779: INFO: Created: latency-svc-zx6pv Oct 23 00:48:32.823: INFO: Got endpoints: latency-svc-zllqk [750.155492ms] Oct 23 00:48:32.829: INFO: Created: latency-svc-nc698 Oct 23 00:48:32.874: INFO: Got endpoints: latency-svc-wt29g [750.170683ms] Oct 23 00:48:32.879: INFO: Created: latency-svc-f4w2w Oct 23 00:48:32.923: INFO: Got endpoints: latency-svc-jl48s [749.320759ms] Oct 23 00:48:32.928: INFO: Created: latency-svc-bt5h5 Oct 23 00:48:32.974: INFO: Got endpoints: latency-svc-khq55 [750.774058ms] Oct 23 00:48:32.980: INFO: Created: latency-svc-x8j79 Oct 23 00:48:33.024: INFO: Got endpoints: latency-svc-zprfq [749.99086ms] Oct 23 00:48:33.029: INFO: Created: latency-svc-sxjqp Oct 23 00:48:33.073: INFO: Got endpoints: latency-svc-l5hvl [749.74489ms] Oct 23 00:48:33.078: INFO: Created: latency-svc-cp9km Oct 23 00:48:33.124: INFO: Got endpoints: latency-svc-788xv [750.861791ms] Oct 23 00:48:33.129: INFO: Created: latency-svc-gmxsq Oct 23 
00:48:33.173: INFO: Got endpoints: latency-svc-cvzqb [749.700959ms] Oct 23 00:48:33.179: INFO: Created: latency-svc-j9wxp Oct 23 00:48:33.223: INFO: Got endpoints: latency-svc-w6l46 [749.692634ms] Oct 23 00:48:33.229: INFO: Created: latency-svc-ksxf9 Oct 23 00:48:33.274: INFO: Got endpoints: latency-svc-w7rxh [751.027016ms] Oct 23 00:48:33.279: INFO: Created: latency-svc-k6vfb Oct 23 00:48:33.323: INFO: Got endpoints: latency-svc-czrjg [748.29216ms] Oct 23 00:48:33.329: INFO: Created: latency-svc-75m9z Oct 23 00:48:33.373: INFO: Got endpoints: latency-svc-rnkbh [749.471361ms] Oct 23 00:48:33.378: INFO: Created: latency-svc-5rrld Oct 23 00:48:33.423: INFO: Got endpoints: latency-svc-cszz8 [750.69122ms] Oct 23 00:48:33.429: INFO: Created: latency-svc-ps25b Oct 23 00:48:33.473: INFO: Got endpoints: latency-svc-2qshk [750.489918ms] Oct 23 00:48:33.479: INFO: Created: latency-svc-4kqgq Oct 23 00:48:33.523: INFO: Got endpoints: latency-svc-zx6pv [749.598478ms] Oct 23 00:48:33.529: INFO: Created: latency-svc-9pqlp Oct 23 00:48:33.573: INFO: Got endpoints: latency-svc-nc698 [749.775149ms] Oct 23 00:48:33.579: INFO: Created: latency-svc-t4hbp Oct 23 00:48:33.623: INFO: Got endpoints: latency-svc-f4w2w [748.561761ms] Oct 23 00:48:33.628: INFO: Created: latency-svc-vh6s8 Oct 23 00:48:33.674: INFO: Got endpoints: latency-svc-bt5h5 [751.007453ms] Oct 23 00:48:33.679: INFO: Created: latency-svc-7zjt4 Oct 23 00:48:33.723: INFO: Got endpoints: latency-svc-x8j79 [748.806603ms] Oct 23 00:48:33.728: INFO: Created: latency-svc-7qs4x Oct 23 00:48:33.773: INFO: Got endpoints: latency-svc-sxjqp [749.556915ms] Oct 23 00:48:33.779: INFO: Created: latency-svc-kgcwv Oct 23 00:48:33.823: INFO: Got endpoints: latency-svc-cp9km [750.576725ms] Oct 23 00:48:33.830: INFO: Created: latency-svc-n427q Oct 23 00:48:33.874: INFO: Got endpoints: latency-svc-gmxsq [750.364235ms] Oct 23 00:48:33.879: INFO: Created: latency-svc-pwzcl Oct 23 00:48:33.924: INFO: Got endpoints: latency-svc-j9wxp [750.933607ms] Oct 23 00:48:33.930: INFO: Created: latency-svc-ggzb9 Oct 23 00:48:33.974: INFO: Got endpoints: latency-svc-ksxf9 [750.559698ms] Oct 23 00:48:33.979: INFO: Created: latency-svc-shdmg Oct 23 00:48:34.023: INFO: Got endpoints: latency-svc-k6vfb [749.136712ms] Oct 23 00:48:34.029: INFO: Created: latency-svc-qwsmr Oct 23 00:48:34.086: INFO: Got endpoints: latency-svc-75m9z [762.659054ms] Oct 23 00:48:34.092: INFO: Created: latency-svc-gvn5k Oct 23 00:48:34.124: INFO: Got endpoints: latency-svc-5rrld [750.570432ms] Oct 23 00:48:34.129: INFO: Created: latency-svc-gs4bs Oct 23 00:48:34.173: INFO: Got endpoints: latency-svc-ps25b [749.832805ms] Oct 23 00:48:34.179: INFO: Created: latency-svc-2q74j Oct 23 00:48:34.223: INFO: Got endpoints: latency-svc-4kqgq [749.820401ms] Oct 23 00:48:34.229: INFO: Created: latency-svc-kzpwz Oct 23 00:48:34.273: INFO: Got endpoints: latency-svc-9pqlp [749.870171ms] Oct 23 00:48:34.279: INFO: Created: latency-svc-sdj8n Oct 23 00:48:34.323: INFO: Got endpoints: latency-svc-t4hbp [749.86148ms] Oct 23 00:48:34.333: INFO: Created: latency-svc-75mw7 Oct 23 00:48:34.373: INFO: Got endpoints: latency-svc-vh6s8 [750.603181ms] Oct 23 00:48:34.380: INFO: Created: latency-svc-pbjpt Oct 23 00:48:34.424: INFO: Got endpoints: latency-svc-7zjt4 [750.305458ms] Oct 23 00:48:34.430: INFO: Created: latency-svc-db8cx Oct 23 00:48:34.473: INFO: Got endpoints: latency-svc-7qs4x [749.412125ms] Oct 23 00:48:34.480: INFO: Created: latency-svc-vtk9r Oct 23 00:48:34.523: INFO: Got endpoints: latency-svc-kgcwv [750.059102ms] Oct 23 
00:48:34.529: INFO: Created: latency-svc-s84hx Oct 23 00:48:34.573: INFO: Got endpoints: latency-svc-n427q [749.931015ms] Oct 23 00:48:34.579: INFO: Created: latency-svc-mrdz6 Oct 23 00:48:34.673: INFO: Got endpoints: latency-svc-pwzcl [798.684114ms] Oct 23 00:48:34.678: INFO: Created: latency-svc-4ccgm Oct 23 00:48:34.723: INFO: Got endpoints: latency-svc-ggzb9 [798.343395ms] Oct 23 00:48:34.730: INFO: Created: latency-svc-4bllk Oct 23 00:48:34.773: INFO: Got endpoints: latency-svc-shdmg [799.289556ms] Oct 23 00:48:34.779: INFO: Created: latency-svc-t7lmx Oct 23 00:48:34.824: INFO: Got endpoints: latency-svc-qwsmr [800.422481ms] Oct 23 00:48:34.830: INFO: Created: latency-svc-phpxt Oct 23 00:48:34.875: INFO: Got endpoints: latency-svc-gvn5k [788.996395ms] Oct 23 00:48:34.880: INFO: Created: latency-svc-th2mz Oct 23 00:48:34.923: INFO: Got endpoints: latency-svc-gs4bs [799.465494ms] Oct 23 00:48:34.929: INFO: Created: latency-svc-c9gm7 Oct 23 00:48:34.974: INFO: Got endpoints: latency-svc-2q74j [800.598733ms] Oct 23 00:48:34.979: INFO: Created: latency-svc-rd986 Oct 23 00:48:35.024: INFO: Got endpoints: latency-svc-kzpwz [800.743948ms] Oct 23 00:48:35.029: INFO: Created: latency-svc-zvzbx Oct 23 00:48:35.073: INFO: Got endpoints: latency-svc-sdj8n [800.301177ms] Oct 23 00:48:35.080: INFO: Created: latency-svc-85p59 Oct 23 00:48:35.124: INFO: Got endpoints: latency-svc-75mw7 [800.659366ms] Oct 23 00:48:35.129: INFO: Created: latency-svc-skp7q Oct 23 00:48:35.173: INFO: Got endpoints: latency-svc-pbjpt [799.993207ms] Oct 23 00:48:35.179: INFO: Created: latency-svc-cc2tf Oct 23 00:48:35.223: INFO: Got endpoints: latency-svc-db8cx [798.537534ms] Oct 23 00:48:35.229: INFO: Created: latency-svc-kqtsq Oct 23 00:48:35.274: INFO: Got endpoints: latency-svc-vtk9r [800.842487ms] Oct 23 00:48:35.279: INFO: Created: latency-svc-pzsxs Oct 23 00:48:35.324: INFO: Got endpoints: latency-svc-s84hx [800.345623ms] Oct 23 00:48:35.329: INFO: Created: latency-svc-55w8t Oct 23 00:48:35.373: INFO: Got endpoints: latency-svc-mrdz6 [800.142205ms] Oct 23 00:48:35.379: INFO: Created: latency-svc-469ks Oct 23 00:48:35.423: INFO: Got endpoints: latency-svc-4ccgm [750.499315ms] Oct 23 00:48:35.429: INFO: Created: latency-svc-tncm7 Oct 23 00:48:35.473: INFO: Got endpoints: latency-svc-4bllk [750.523509ms] Oct 23 00:48:35.478: INFO: Created: latency-svc-dnjxr Oct 23 00:48:35.524: INFO: Got endpoints: latency-svc-t7lmx [750.28307ms] Oct 23 00:48:35.529: INFO: Created: latency-svc-qlxfc Oct 23 00:48:35.573: INFO: Got endpoints: latency-svc-phpxt [749.65358ms] Oct 23 00:48:35.578: INFO: Created: latency-svc-4nk7s Oct 23 00:48:35.624: INFO: Got endpoints: latency-svc-th2mz [748.202635ms] Oct 23 00:48:35.629: INFO: Created: latency-svc-25k52 Oct 23 00:48:35.672: INFO: Got endpoints: latency-svc-c9gm7 [749.164246ms] Oct 23 00:48:35.678: INFO: Created: latency-svc-fjcmf Oct 23 00:48:35.724: INFO: Got endpoints: latency-svc-rd986 [750.092032ms] Oct 23 00:48:35.730: INFO: Created: latency-svc-wttlb Oct 23 00:48:35.774: INFO: Got endpoints: latency-svc-zvzbx [749.548156ms] Oct 23 00:48:35.779: INFO: Created: latency-svc-bt9dn Oct 23 00:48:35.823: INFO: Got endpoints: latency-svc-85p59 [749.917446ms] Oct 23 00:48:35.830: INFO: Created: latency-svc-jzxck Oct 23 00:48:35.875: INFO: Got endpoints: latency-svc-skp7q [750.724362ms] Oct 23 00:48:35.879: INFO: Created: latency-svc-bthbv Oct 23 00:48:35.923: INFO: Got endpoints: latency-svc-cc2tf [750.122099ms] Oct 23 00:48:35.929: INFO: Created: latency-svc-wlkl5 Oct 23 00:48:35.974: INFO: 
Got endpoints: latency-svc-kqtsq [751.269605ms] Oct 23 00:48:35.980: INFO: Created: latency-svc-js6k9 Oct 23 00:48:36.024: INFO: Got endpoints: latency-svc-pzsxs [750.147695ms] Oct 23 00:48:36.029: INFO: Created: latency-svc-hqhf6 Oct 23 00:48:36.073: INFO: Got endpoints: latency-svc-55w8t [749.228812ms] Oct 23 00:48:36.078: INFO: Created: latency-svc-xcptp Oct 23 00:48:36.124: INFO: Got endpoints: latency-svc-469ks [750.922827ms] Oct 23 00:48:36.131: INFO: Created: latency-svc-5tk2m Oct 23 00:48:36.173: INFO: Got endpoints: latency-svc-tncm7 [749.80759ms] Oct 23 00:48:36.181: INFO: Created: latency-svc-zbrhk Oct 23 00:48:36.223: INFO: Got endpoints: latency-svc-dnjxr [750.012892ms] Oct 23 00:48:36.230: INFO: Created: latency-svc-d7vgs Oct 23 00:48:36.274: INFO: Got endpoints: latency-svc-qlxfc [750.077094ms] Oct 23 00:48:36.279: INFO: Created: latency-svc-wdtvr Oct 23 00:48:36.324: INFO: Got endpoints: latency-svc-4nk7s [750.573372ms] Oct 23 00:48:36.329: INFO: Created: latency-svc-vnx8f Oct 23 00:48:36.374: INFO: Got endpoints: latency-svc-25k52 [750.32947ms] Oct 23 00:48:36.380: INFO: Created: latency-svc-7j76n Oct 23 00:48:36.423: INFO: Got endpoints: latency-svc-fjcmf [750.670396ms] Oct 23 00:48:36.430: INFO: Created: latency-svc-m5wg5 Oct 23 00:48:36.473: INFO: Got endpoints: latency-svc-wttlb [748.683546ms] Oct 23 00:48:36.478: INFO: Created: latency-svc-gc6jw Oct 23 00:48:36.525: INFO: Got endpoints: latency-svc-bt9dn [750.807677ms] Oct 23 00:48:36.531: INFO: Created: latency-svc-vsvzf Oct 23 00:48:36.574: INFO: Got endpoints: latency-svc-jzxck [750.600559ms] Oct 23 00:48:36.580: INFO: Created: latency-svc-vm66t Oct 23 00:48:36.623: INFO: Got endpoints: latency-svc-bthbv [748.768795ms] Oct 23 00:48:36.629: INFO: Created: latency-svc-tkczg Oct 23 00:48:36.674: INFO: Got endpoints: latency-svc-wlkl5 [750.206438ms] Oct 23 00:48:36.679: INFO: Created: latency-svc-s89wb Oct 23 00:48:36.723: INFO: Got endpoints: latency-svc-js6k9 [749.344266ms] Oct 23 00:48:36.730: INFO: Created: latency-svc-xmddx Oct 23 00:48:36.773: INFO: Got endpoints: latency-svc-hqhf6 [748.961451ms] Oct 23 00:48:36.778: INFO: Created: latency-svc-kcg2w Oct 23 00:48:36.825: INFO: Got endpoints: latency-svc-xcptp [751.941102ms] Oct 23 00:48:36.831: INFO: Created: latency-svc-2qkms Oct 23 00:48:36.873: INFO: Got endpoints: latency-svc-5tk2m [748.585971ms] Oct 23 00:48:36.883: INFO: Created: latency-svc-dmjrk Oct 23 00:48:36.923: INFO: Got endpoints: latency-svc-zbrhk [749.363091ms] Oct 23 00:48:36.928: INFO: Created: latency-svc-gx7jl Oct 23 00:48:36.974: INFO: Got endpoints: latency-svc-d7vgs [750.574736ms] Oct 23 00:48:36.980: INFO: Created: latency-svc-hpm5s Oct 23 00:48:37.023: INFO: Got endpoints: latency-svc-wdtvr [748.948052ms] Oct 23 00:48:37.028: INFO: Created: latency-svc-thphc Oct 23 00:48:37.074: INFO: Got endpoints: latency-svc-vnx8f [749.543397ms] Oct 23 00:48:37.078: INFO: Created: latency-svc-xxpmn Oct 23 00:48:37.123: INFO: Got endpoints: latency-svc-7j76n [748.667937ms] Oct 23 00:48:37.129: INFO: Created: latency-svc-lsctr Oct 23 00:48:37.174: INFO: Got endpoints: latency-svc-m5wg5 [750.855074ms] Oct 23 00:48:37.180: INFO: Created: latency-svc-gxvlc Oct 23 00:48:37.223: INFO: Got endpoints: latency-svc-gc6jw [749.737557ms] Oct 23 00:48:37.228: INFO: Created: latency-svc-xwr55 Oct 23 00:48:37.274: INFO: Got endpoints: latency-svc-vsvzf [749.678438ms] Oct 23 00:48:37.280: INFO: Created: latency-svc-h4qlt Oct 23 00:48:37.323: INFO: Got endpoints: latency-svc-vm66t [749.48027ms] Oct 23 00:48:37.331: INFO: 
Created: latency-svc-ddh52 Oct 23 00:48:37.373: INFO: Got endpoints: latency-svc-tkczg [749.892964ms] Oct 23 00:48:37.380: INFO: Created: latency-svc-cprf9 Oct 23 00:48:37.423: INFO: Got endpoints: latency-svc-s89wb [749.678089ms] Oct 23 00:48:37.430: INFO: Created: latency-svc-jzjxn Oct 23 00:48:37.473: INFO: Got endpoints: latency-svc-xmddx [749.132907ms] Oct 23 00:48:37.480: INFO: Created: latency-svc-4s8nk Oct 23 00:48:37.523: INFO: Got endpoints: latency-svc-kcg2w [750.486773ms] Oct 23 00:48:37.528: INFO: Created: latency-svc-hcx7t Oct 23 00:48:37.573: INFO: Got endpoints: latency-svc-2qkms [748.17108ms] Oct 23 00:48:37.580: INFO: Created: latency-svc-lkzg5 Oct 23 00:48:37.623: INFO: Got endpoints: latency-svc-dmjrk [750.356871ms] Oct 23 00:48:37.630: INFO: Created: latency-svc-7znfq Oct 23 00:48:37.673: INFO: Got endpoints: latency-svc-gx7jl [750.50145ms] Oct 23 00:48:37.681: INFO: Created: latency-svc-hm4cw Oct 23 00:48:37.723: INFO: Got endpoints: latency-svc-hpm5s [749.241038ms] Oct 23 00:48:37.729: INFO: Created: latency-svc-r9nvp Oct 23 00:48:37.774: INFO: Got endpoints: latency-svc-thphc [750.721208ms] Oct 23 00:48:37.781: INFO: Created: latency-svc-8n2ts Oct 23 00:48:37.823: INFO: Got endpoints: latency-svc-xxpmn [749.335659ms] Oct 23 00:48:37.829: INFO: Created: latency-svc-g9n8d Oct 23 00:48:37.873: INFO: Got endpoints: latency-svc-lsctr [750.342701ms] Oct 23 00:48:37.878: INFO: Created: latency-svc-ftm9h Oct 23 00:48:37.924: INFO: Got endpoints: latency-svc-gxvlc [749.473482ms] Oct 23 00:48:37.929: INFO: Created: latency-svc-n7vqm Oct 23 00:48:37.974: INFO: Got endpoints: latency-svc-xwr55 [750.745844ms] Oct 23 00:48:37.980: INFO: Created: latency-svc-b6czw Oct 23 00:48:38.023: INFO: Got endpoints: latency-svc-h4qlt [748.744456ms] Oct 23 00:48:38.029: INFO: Created: latency-svc-k76lb Oct 23 00:48:38.073: INFO: Got endpoints: latency-svc-ddh52 [749.633626ms] Oct 23 00:48:38.078: INFO: Created: latency-svc-rgbm9 Oct 23 00:48:38.125: INFO: Got endpoints: latency-svc-cprf9 [751.133084ms] Oct 23 00:48:38.130: INFO: Created: latency-svc-xl89l Oct 23 00:48:38.173: INFO: Got endpoints: latency-svc-jzjxn [749.888463ms] Oct 23 00:48:38.180: INFO: Created: latency-svc-4zckd Oct 23 00:48:38.225: INFO: Got endpoints: latency-svc-4s8nk [751.968043ms] Oct 23 00:48:38.230: INFO: Created: latency-svc-5hg4v Oct 23 00:48:38.274: INFO: Got endpoints: latency-svc-hcx7t [750.064968ms] Oct 23 00:48:38.279: INFO: Created: latency-svc-dr8hh Oct 23 00:48:38.324: INFO: Got endpoints: latency-svc-lkzg5 [750.192452ms] Oct 23 00:48:38.330: INFO: Created: latency-svc-54qdl Oct 23 00:48:38.374: INFO: Got endpoints: latency-svc-7znfq [750.756992ms] Oct 23 00:48:38.379: INFO: Created: latency-svc-ff4lr Oct 23 00:48:38.423: INFO: Got endpoints: latency-svc-hm4cw [749.978053ms] Oct 23 00:48:38.432: INFO: Created: latency-svc-fb7wx Oct 23 00:48:38.474: INFO: Got endpoints: latency-svc-r9nvp [750.400593ms] Oct 23 00:48:38.480: INFO: Created: latency-svc-hvjp5 Oct 23 00:48:38.524: INFO: Got endpoints: latency-svc-8n2ts [750.911568ms] Oct 23 00:48:38.530: INFO: Created: latency-svc-wc5sc Oct 23 00:48:38.573: INFO: Got endpoints: latency-svc-g9n8d [750.418887ms] Oct 23 00:48:38.580: INFO: Created: latency-svc-47xts Oct 23 00:48:38.623: INFO: Got endpoints: latency-svc-ftm9h [750.283317ms] Oct 23 00:48:38.630: INFO: Created: latency-svc-zjn7p Oct 23 00:48:38.724: INFO: Got endpoints: latency-svc-n7vqm [800.256114ms] Oct 23 00:48:38.729: INFO: Created: latency-svc-rwtkj Oct 23 00:48:38.773: INFO: Got endpoints: 
latency-svc-b6czw [799.769624ms] Oct 23 00:48:38.778: INFO: Created: latency-svc-js4wq Oct 23 00:48:38.824: INFO: Got endpoints: latency-svc-k76lb [800.842595ms] Oct 23 00:48:38.829: INFO: Created: latency-svc-gf7lg Oct 23 00:48:38.875: INFO: Got endpoints: latency-svc-rgbm9 [801.723057ms] Oct 23 00:48:38.880: INFO: Created: latency-svc-lq6fc Oct 23 00:48:38.923: INFO: Got endpoints: latency-svc-xl89l [798.494293ms] Oct 23 00:48:38.929: INFO: Created: latency-svc-z2xwq Oct 23 00:48:38.973: INFO: Got endpoints: latency-svc-4zckd [799.993382ms] Oct 23 00:48:38.979: INFO: Created: latency-svc-98nd4 Oct 23 00:48:39.023: INFO: Got endpoints: latency-svc-5hg4v [798.540392ms] Oct 23 00:48:39.029: INFO: Created: latency-svc-mrgsw Oct 23 00:48:39.073: INFO: Got endpoints: latency-svc-dr8hh [799.785694ms] Oct 23 00:48:39.078: INFO: Created: latency-svc-965dd Oct 23 00:48:39.130: INFO: Got endpoints: latency-svc-54qdl [806.080483ms] Oct 23 00:48:39.135: INFO: Created: latency-svc-tt69b Oct 23 00:48:39.173: INFO: Got endpoints: latency-svc-ff4lr [798.845003ms] Oct 23 00:48:39.178: INFO: Created: latency-svc-24snl Oct 23 00:48:39.223: INFO: Got endpoints: latency-svc-fb7wx [799.476147ms] Oct 23 00:48:39.228: INFO: Created: latency-svc-6dg6s Oct 23 00:48:39.274: INFO: Got endpoints: latency-svc-hvjp5 [799.856554ms] Oct 23 00:48:39.279: INFO: Created: latency-svc-26z27 Oct 23 00:48:39.325: INFO: Got endpoints: latency-svc-wc5sc [800.08014ms] Oct 23 00:48:39.330: INFO: Created: latency-svc-fqbhg Oct 23 00:48:39.373: INFO: Got endpoints: latency-svc-47xts [799.980101ms] Oct 23 00:48:39.379: INFO: Created: latency-svc-sdsch Oct 23 00:48:39.424: INFO: Got endpoints: latency-svc-zjn7p [801.00007ms] Oct 23 00:48:39.474: INFO: Got endpoints: latency-svc-rwtkj [750.444077ms] Oct 23 00:48:39.524: INFO: Got endpoints: latency-svc-js4wq [750.869309ms] Oct 23 00:48:39.573: INFO: Got endpoints: latency-svc-gf7lg [749.044242ms] Oct 23 00:48:39.624: INFO: Got endpoints: latency-svc-lq6fc [748.597842ms] Oct 23 00:48:39.673: INFO: Got endpoints: latency-svc-z2xwq [750.112789ms] Oct 23 00:48:39.723: INFO: Got endpoints: latency-svc-98nd4 [749.462367ms] Oct 23 00:48:39.773: INFO: Got endpoints: latency-svc-mrgsw [749.493572ms] Oct 23 00:48:39.823: INFO: Got endpoints: latency-svc-965dd [749.628047ms] Oct 23 00:48:39.873: INFO: Got endpoints: latency-svc-tt69b [742.658252ms] Oct 23 00:48:39.924: INFO: Got endpoints: latency-svc-24snl [750.526233ms] Oct 23 00:48:39.973: INFO: Got endpoints: latency-svc-6dg6s [749.852376ms] Oct 23 00:48:40.024: INFO: Got endpoints: latency-svc-26z27 [750.165918ms] Oct 23 00:48:40.073: INFO: Got endpoints: latency-svc-fqbhg [748.44965ms] Oct 23 00:48:40.124: INFO: Got endpoints: latency-svc-sdsch [750.894467ms] Oct 23 00:48:40.124: INFO: Latencies: [7.656539ms 7.804957ms 9.542123ms 13.09673ms 15.378627ms 18.587941ms 21.50832ms 24.136771ms 27.196342ms 30.267348ms 33.271514ms 35.339299ms 38.352353ms 40.952788ms 41.264093ms 42.823744ms 43.880113ms 44.025574ms 44.394488ms 44.446303ms 44.815398ms 44.955142ms 45.047522ms 45.30122ms 45.364569ms 45.437348ms 45.569637ms 46.02106ms 46.217904ms 46.293072ms 48.941561ms 96.326682ms 141.966169ms 193.771892ms 236.274039ms 284.899469ms 331.351412ms 377.889722ms 426.507346ms 474.329726ms 520.565969ms 568.531514ms 614.670698ms 662.83439ms 709.547765ms 742.658252ms 748.160831ms 748.17108ms 748.202635ms 748.29216ms 748.44965ms 748.561761ms 748.585971ms 748.597842ms 748.667937ms 748.683546ms 748.744456ms 748.768795ms 748.806603ms 748.948052ms 748.961451ms 
749.044242ms 749.053942ms 749.132907ms 749.136712ms 749.164246ms 749.228812ms 749.241038ms 749.320759ms 749.333516ms 749.335659ms 749.344266ms 749.363091ms 749.412125ms 749.462367ms 749.471361ms 749.473482ms 749.48027ms 749.493572ms 749.543397ms 749.548156ms 749.556915ms 749.598478ms 749.628047ms 749.633626ms 749.65358ms 749.678089ms 749.678438ms 749.692634ms 749.700959ms 749.737557ms 749.74489ms 749.775149ms 749.80759ms 749.820401ms 749.832805ms 749.852376ms 749.86148ms 749.870171ms 749.888463ms 749.892964ms 749.917446ms 749.931015ms 749.978053ms 749.99086ms 749.991797ms 750.005334ms 750.012892ms 750.059102ms 750.064968ms 750.077094ms 750.092032ms 750.112789ms 750.122099ms 750.12494ms 750.147695ms 750.155492ms 750.165918ms 750.170683ms 750.192452ms 750.206438ms 750.28307ms 750.283317ms 750.305458ms 750.317586ms 750.32947ms 750.342701ms 750.356871ms 750.364235ms 750.400593ms 750.418887ms 750.444077ms 750.486773ms 750.489918ms 750.499315ms 750.50145ms 750.523509ms 750.526233ms 750.551295ms 750.559698ms 750.570432ms 750.573372ms 750.574736ms 750.576725ms 750.600559ms 750.603181ms 750.670396ms 750.69122ms 750.721208ms 750.724362ms 750.745844ms 750.756992ms 750.774058ms 750.807677ms 750.855074ms 750.861791ms 750.869309ms 750.894467ms 750.911568ms 750.922827ms 750.933607ms 751.007453ms 751.027016ms 751.133084ms 751.264747ms 751.269605ms 751.319205ms 751.941102ms 751.968043ms 762.659054ms 788.996395ms 798.343395ms 798.494293ms 798.537534ms 798.540392ms 798.684114ms 798.845003ms 799.289556ms 799.465494ms 799.476147ms 799.769624ms 799.785694ms 799.856554ms 799.980101ms 799.993207ms 799.993382ms 800.08014ms 800.142205ms 800.256114ms 800.301177ms 800.345623ms 800.422481ms 800.598733ms 800.659366ms 800.743948ms 800.842487ms 800.842595ms 801.00007ms 801.723057ms 806.080483ms] Oct 23 00:48:40.125: INFO: 50 %ile: 749.892964ms Oct 23 00:48:40.125: INFO: 90 %ile: 799.769624ms Oct 23 00:48:40.125: INFO: 99 %ile: 801.723057ms Oct 23 00:48:40.125: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:40.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3774" for this suite. 
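Note on the three percentile lines above: they are derived from the sorted "Latencies:" list (200 samples of the delay between a service being created and its endpoints being observed). As an illustration only, here is a minimal, self-contained Go sketch of a nearest-rank percentile over sorted time.Duration samples; this is hypothetical code, not the e2e framework's own implementation, and the sample values stand in for the full 200-entry list.

// percentile.go: nearest-rank percentile over sorted latency samples.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (1..100) of an ascending-sorted
// slice using the nearest-rank method: for 200 samples, p=50 picks the
// 100th sorted value, which is consistent with the 749.892964ms reported above.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (len(sorted) * p / 100) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Hypothetical samples standing in for the recorded latencies.
	samples := []time.Duration{
		7656539 * time.Nanosecond,   // 7.656539ms
		749892964 * time.Nanosecond, // 749.892964ms
		799769624 * time.Nanosecond, // 799.769624ms
		801723057 * time.Nanosecond, // 801.723057ms
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}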
• [SLOW TEST:12.876 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":9,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:26.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:48:26.054: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 23 00:48:26.059: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 23 00:48:31.062: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 23 00:48:33.067: INFO: Creating deployment "test-rolling-update-deployment" Oct 23 00:48:33.070: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 23 00:48:33.075: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 23 00:48:35.081: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 23 00:48:35.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:48:37.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:48:39.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546913, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:48:41.085: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 00:48:41.093: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8547 b201aae2-8306-41f7-8b3d-f9dd4a6db537 67151 1 2021-10-23 00:48:33 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-10-23 00:48:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:48:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] 
[] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006190d58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 00:48:33 +0000 UTC,LastTransitionTime:2021-10-23 00:48:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-10-23 00:48:40 +0000 UTC,LastTransitionTime:2021-10-23 00:48:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 23 00:48:41.096: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-8547 68839bb2-1cbc-4d15-b4bf-365b72ff5aae 67142 1 2021-10-23 00:48:33 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b201aae2-8306-41f7-8b3d-f9dd4a6db537 0xc006191387 0xc006191388}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:48:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b201aae2-8306-41f7-8b3d-f9dd4a6db537\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006191458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:48:41.096: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 23 00:48:41.097: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8547 bad59ff6-a13f-4ee1-a3c4-0270a6648dc6 67150 2 2021-10-23 00:48:26 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b201aae2-8306-41f7-8b3d-f9dd4a6db537 0xc006191267 0xc006191268}] [] [{e2e.test Update apps/v1 2021-10-23 00:48:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:48:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b201aae2-8306-41f7-8b3d-f9dd4a6db537\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006191308 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:48:41.099: INFO: Pod 
"test-rolling-update-deployment-585b757574-27vd6" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-27vd6 test-rolling-update-deployment-585b757574- deployment-8547 7693714b-6750-4580-a992-d4ce119fc39a 67141 0 2021-10-23 00:48:33 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.82" ], "mac": "82:02:ac:75:aa:0f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.82" ], "mac": "82:02:ac:75:aa:0f", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 68839bb2-1cbc-4d15-b4bf-365b72ff5aae 0xc00619188f 0xc0061918a0}] [] [{kube-controller-manager Update v1 2021-10-23 00:48:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68839bb2-1cbc-4d15-b4bf-365b72ff5aae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:48:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hhrng,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},}
,},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhrng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.82,StartTime:2021-10-23 00:48:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:48:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://4e4920a929329f57e43b099c3feaf7059bd190848914e21abc74f65b3869774a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8547" for this suite. • [SLOW TEST:15.075 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":33,"skipped":629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:34.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4409.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4409.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:48:44.773: INFO: DNS probes using dns-4409/dns-test-e712ad5a-324c-4801-9b88-7cce82aa6884 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:44.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4409" for this suite. • [SLOW TEST:10.088 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":17,"skipped":223,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:11.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1791 STEP: creating service affinity-nodeport-transition in namespace services-1791 STEP: creating replication controller affinity-nodeport-transition in namespace services-1791 I1023 00:46:11.879862 27 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1791, replica count: 3 I1023 00:46:14.930680 27 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:46:17.931411 27 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:46:17.941: INFO: Creating new exec pod Oct 23 00:46:24.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Oct 23 00:46:25.263: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Oct 23 00:46:25.263: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:46:25.263: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.42.218 80' Oct 23 00:46:25.512: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.42.218 80\nConnection to 10.233.42.218 80 port [tcp/http] succeeded!\n" Oct 23 00:46:25.512: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Oct 23 00:46:25.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:25.801: INFO: rc: 1 Oct 23 00:46:25.801: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:26.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:27.214: INFO: rc: 1 Oct 23 00:46:27.215: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:27.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:28.088: INFO: rc: 1 Oct 23 00:46:28.088: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:28.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:29.041: INFO: rc: 1 Oct 23 00:46:29.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31129 + echo hostName nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:29.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:30.282: INFO: rc: 1 Oct 23 00:46:30.282: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:30.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:31.087: INFO: rc: 1 Oct 23 00:46:31.087: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:32.077: INFO: rc: 1 Oct 23 00:46:32.077: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:32.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:33.155: INFO: rc: 1 Oct 23 00:46:33.155: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:33.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:34.138: INFO: rc: 1 Oct 23 00:46:34.138: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:34.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:35.129: INFO: rc: 1 Oct 23 00:46:35.129: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:35.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:36.038: INFO: rc: 1 Oct 23 00:46:36.038: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:36.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:37.070: INFO: rc: 1 Oct 23 00:46:37.070: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:37.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:38.049: INFO: rc: 1 Oct 23 00:46:38.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:38.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:39.024: INFO: rc: 1 Oct 23 00:46:39.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:39.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:40.060: INFO: rc: 1 Oct 23 00:46:40.060: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31129 + echo hostName nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:41.045: INFO: rc: 1 Oct 23 00:46:41.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:41.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:42.142: INFO: rc: 1 Oct 23 00:46:42.142: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:42.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:43.424: INFO: rc: 1 Oct 23 00:46:43.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:43.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:44.566: INFO: rc: 1 Oct 23 00:46:44.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:44.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:45.263: INFO: rc: 1 Oct 23 00:46:45.263: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:45.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:46.127: INFO: rc: 1 Oct 23 00:46:46.127: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:46.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:47.469: INFO: rc: 1 Oct 23 00:46:47.469: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:47.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:48.520: INFO: rc: 1 Oct 23 00:46:48.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:48.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:49.938: INFO: rc: 1 Oct 23 00:46:49.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:50.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:51.253: INFO: rc: 1 Oct 23 00:46:51.253: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:51.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:52.074: INFO: rc: 1 Oct 23 00:46:52.074: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:52.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:53.042: INFO: rc: 1 Oct 23 00:46:53.042: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:53.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:54.034: INFO: rc: 1 Oct 23 00:46:54.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:54.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:55.024: INFO: rc: 1 Oct 23 00:46:55.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:55.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:56.050: INFO: rc: 1 Oct 23 00:46:56.050: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:56.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:57.029: INFO: rc: 1 Oct 23 00:46:57.029: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:57.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:58.048: INFO: rc: 1 Oct 23 00:46:58.048: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:46:58.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:46:59.049: INFO: rc: 1 Oct 23 00:46:59.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:46:59.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:47:00.067: INFO: rc: 1 Oct 23 00:47:00.067: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:47:00.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:47:01.055: INFO: rc: 1 Oct 23 00:47:01.055: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:47:01.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129' Oct 23 00:47:02.096: INFO: rc: 1 Oct 23 00:47:02.096: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31129 nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
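The cycle above is the suite's reachability poll: the same /bin/sh probe is re-executed about once per second until a two-minute deadline expires. As a standalone illustration of that pattern (a sketch, not the framework's own code), the following Go program re-runs the probe via kubectl; the kubeconfig path, namespace, pod name, and endpoint are taken verbatim from the log, and kubectl is assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// One probe == one "Running '/usr/local/bin/kubectl ...'" line in the
	// log: exec into the client pod and attempt a TCP connect with a
	// 2-second timeout. kubectl exits non-zero when nc cannot connect.
	probe := func() bool {
		cmd := exec.Command("kubectl",
			"--kubeconfig=/root/.kube/config",
			"--namespace=services-1791",
			"exec", "execpod-affinitygjrdz", "--",
			"/bin/sh", "-x", "-c",
			"echo hostName | nc -v -t -w 2 10.10.190.207 31129")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
		return err == nil
	}

	// Mirror the log's cadence: retry about once per second until the
	// 2m0s deadline, then give up with the same timeout message.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if probe() {
			fmt.Println("service reachable")
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(time.Second)
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31129 over TCP protocol")
}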
Oct 23 00:48:25.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129'
Oct 23 00:48:26.049: INFO: rc: 1
Oct 23 00:48:26.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31129
nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Oct 23 00:48:26.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129'
Oct 23 00:48:26.300: INFO: rc: 1
Oct 23 00:48:26.300: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1791 exec execpod-affinitygjrdz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31129:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31129
nc: connect to 10.10.190.207 port 31129 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
Oct 23 00:48:26.301: FAIL: Unexpected error:
    <*errors.errorString | 0xc0079d62c0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31129 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31129 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001ae9600, 0x779f8f8, 0xc006f44000, 0xc0014a0000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2527
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001881980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001881980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001881980, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 00:48:26.302: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1791, will wait for the garbage collector to delete the pods
Oct 23 00:48:26.375: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.105766ms
Oct 23 00:48:26.476: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.759405ms
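A NodePort probe that fails with "Connection refused" for the full two minutes usually means kube-proxy had no ready endpoints to program for the service on that node. A hedged client-go sketch for checking that after the fact; it assumes the Service is named after the ReplicationController (affinity-nodeport-transition), which is how this e2e test conventionally names it, and reuses the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite used.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Assumed name: the test's Service conventionally matches its RC.
	ns, name := "services-1791", "affinity-nodeport-transition"

	// Does the Service still advertise nodePort 31129?
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port=%d nodePort=%d proto=%s\n", p.Port, p.NodePort, p.Protocol)
	}

	// "Connection refused" on a NodePort usually means no ready endpoints,
	// so count ready vs. not-ready addresses behind the Service.
	ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ss := range ep.Subsets {
		fmt.Printf("ready=%d notReady=%d\n", len(ss.Addresses), len(ss.NotReadyAddresses))
	}
}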
Oct 23 00:48:46.494: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-bwt8w: { } Scheduled: Successfully assigned services-1791/affinity-nodeport-transition-bwt8w to node2
Oct 23 00:48:46.494: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-h9v8w: { } Scheduled: Successfully assigned services-1791/affinity-nodeport-transition-h9v8w to node2
Oct 23 00:48:46.494: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-kxqbt: { } Scheduled: Successfully assigned services-1791/affinity-nodeport-transition-kxqbt to node2
Oct 23 00:48:46.494: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitygjrdz: { } Scheduled: Successfully assigned services-1791/execpod-affinitygjrdz to node2
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:11 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-h9v8w
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:11 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-kxqbt
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:11 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-bwt8w
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:14 +0000 UTC - event for affinity-nodeport-transition-bwt8w: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:14 +0000 UTC - event for affinity-nodeport-transition-h9v8w: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-bwt8w: {kubelet node2} Started: Started container affinity-nodeport-transition
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-bwt8w: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-bwt8w: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 700.524391ms
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-h9v8w: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-h9v8w: {kubelet node2} Started: Started container affinity-nodeport-transition
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-h9v8w: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 504.775952ms
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-kxqbt: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 511.771671ms
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-kxqbt: {kubelet node2} Created: Created container affinity-nodeport-transition
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:15 +0000 UTC - event for affinity-nodeport-transition-kxqbt: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:16 +0000 UTC - event for
affinity-nodeport-transition-kxqbt: {kubelet node2} Started: Started container affinity-nodeport-transition Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:21 +0000 UTC - event for execpod-affinitygjrdz: {kubelet node2} Created: Created container agnhost-container Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:21 +0000 UTC - event for execpod-affinitygjrdz: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:21 +0000 UTC - event for execpod-affinitygjrdz: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 262.022623ms Oct 23 00:48:46.494: INFO: At 2021-10-23 00:46:21 +0000 UTC - event for execpod-affinitygjrdz: {kubelet node2} Started: Started container agnhost-container Oct 23 00:48:46.494: INFO: At 2021-10-23 00:48:26 +0000 UTC - event for affinity-nodeport-transition-bwt8w: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Oct 23 00:48:46.494: INFO: At 2021-10-23 00:48:26 +0000 UTC - event for affinity-nodeport-transition-h9v8w: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Oct 23 00:48:46.494: INFO: At 2021-10-23 00:48:26 +0000 UTC - event for affinity-nodeport-transition-kxqbt: {kubelet node2} Killing: Stopping container affinity-nodeport-transition Oct 23 00:48:46.494: INFO: At 2021-10-23 00:48:26 +0000 UTC - event for execpod-affinitygjrdz: {kubelet node2} Killing: Stopping container agnhost-container Oct 23 00:48:46.496: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:48:46.496: INFO: Oct 23 00:48:46.500: INFO: Logging node info for node master1 Oct 23 00:48:46.502: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 67209 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:48:46.503: INFO: Logging kubelet events for node master1 Oct 23 00:48:46.505: INFO: Logging pods the kubelet 
thinks is on node master1 Oct 23 00:48:46.537: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Container coredns ready: true, restart count 2 Oct 23 00:48:46.537: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.537: INFO: Container docker-registry ready: true, restart count 0 Oct 23 00:48:46.537: INFO: Container nginx ready: true, restart count 0 Oct 23 00:48:46.537: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.537: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:48:46.537: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:48:46.537: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:48:46.537: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 00:48:46.537: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:48:46.537: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:48:46.537: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:48:46.537: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:48:46.537: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.537: INFO: Container kube-scheduler ready: true, restart count 0 W1023 00:48:46.550202 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
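[Annotation] These per-node pod dumps ("Logging pods the kubelet thinks is on node ...") can be reproduced outside the framework with a plain client-go list filtered on spec.nodeName; a minimal sketch, assuming the kubeconfig path used throughout this run:

// Minimal client-go sketch: list pods across all namespaces whose
// spec.nodeName matches the node being debugged (master1 here, from the log).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
	}
}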
Oct 23 00:48:46.629: INFO: Latency metrics for node master1 Oct 23 00:48:46.629: INFO: Logging node info for node master2 Oct 23 00:48:46.631: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 67076 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:39 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:39 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:39 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:48:39 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:48:46.632: INFO: Logging kubelet events for node master2 Oct 23 00:48:46.634: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 00:48:46.659: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:48:46.659: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Container autoscaler ready: true, restart count 1 Oct 23 00:48:46.659: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.659: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:48:46.659: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:48:46.659: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:48:46.659: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:48:46.659: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:48:46.659: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:48:46.659: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:48:46.659: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.659: INFO: Container kube-proxy ready: true, restart count 2 W1023 00:48:46.674936 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
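[Annotation] The Node Info blocks in this dump serialize entire Node objects; when only the health signals matter, fetching just status.conditions (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready) is enough. A sketch under the same kubeconfig assumption:

// Sketch: pull only the node conditions that these Node Info dumps
// serialize in full, for one node at a time (master2 here, from the log).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "master2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s (%s)\n", c.Type, c.Status, c.Reason, c.Message)
	}
}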
Oct 23 00:48:46.741: INFO: Latency metrics for node master2 Oct 23 00:48:46.741: INFO: Logging node info for node master3 Oct 23 00:48:46.743: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 67206 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:48:43 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:48:46.744: INFO: Logging kubelet events for node master3 Oct 23 00:48:46.746: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 00:48:46.761: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:48:46.762: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:48:46.762: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:48:46.762: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:48:46.762: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:48:46.762: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 00:48:46.762: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.762: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:48:46.762: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:48:46.762: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:48:46.762: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:48:46.762: 
INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.762: INFO: Container coredns ready: true, restart count 2 W1023 00:48:46.776084 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:48:46.848: INFO: Latency metrics for node master3 Oct 23 00:48:46.848: INFO: Logging node info for node node1 Oct 23 00:48:46.851: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 66844 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:17:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:37 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:48:37 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:48:46.852: INFO: Logging kubelet events for node node1 Oct 23 00:48:46.853: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 00:48:46.870: INFO: node-exporter-v656r 
started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:48:46.870: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:48:46.870: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 00:48:46.870: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 00:48:46.870: INFO: affinity-nodeport-62bdl started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container affinity-nodeport ready: false, restart count 0 Oct 23 00:48:46.870: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:48:46.870: INFO: test-pod started at 2021-10-23 00:44:39 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container webserver ready: true, restart count 0 Oct 23 00:48:46.870: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:48:46.870: INFO: Container collectd ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:48:46.870: INFO: externalname-service-9rwls started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container externalname-service ready: false, restart count 0 Oct 23 00:48:46.870: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 00:48:46.870: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 00:48:46.870: INFO: Container config-reloader ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container grafana ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container prometheus ready: true, restart count 1 Oct 23 00:48:46.870: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 00:48:46.870: INFO: svc-latency-rc-lx48f started at 2021-10-23 00:48:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container svc-latency-rc ready: true, restart count 0 Oct 23 00:48:46.870: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:48:46.870: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kube-proxy ready: true, restart count 2 Oct 
23 00:48:46.870: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:48:46.870: INFO: netserver-0 started at 2021-10-23 00:48:38 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container webserver ready: false, restart count 0 Oct 23 00:48:46.870: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 00:48:46.870: INFO: Container discover ready: false, restart count 0 Oct 23 00:48:46.870: INFO: Container init ready: false, restart count 0 Oct 23 00:48:46.870: INFO: Container install ready: false, restart count 0 Oct 23 00:48:46.870: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:46.870: INFO: Container nodereport ready: true, restart count 0 Oct 23 00:48:46.870: INFO: Container reconcile ready: true, restart count 0 Oct 23 00:48:46.870: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:46.870: INFO: Container nginx-proxy ready: true, restart count 2 W1023 00:48:46.887480 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:48:47.281: INFO: Latency metrics for node node1 Oct 23 00:48:47.281: INFO: Logging node info for node node2 Oct 23 00:48:47.284: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 66916 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true 
feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:38 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:38 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:48:38 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:48:38 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:48:47.286: INFO: Logging kubelet events for node node2 Oct 23 00:48:47.288: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 00:48:47.334: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:48:47.334: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 00:48:47.334: INFO: Container discover ready: false, restart count 0 Oct 23 00:48:47.334: INFO: Container init ready: false, restart count 0 Oct 23 00:48:47.334: INFO: Container install ready: false, restart count 0 Oct 23 00:48:47.334: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 00:48:47.334: INFO: ss2-0 started at 2021-10-23 00:48:14 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container webserver ready: true, restart count 0 Oct 23 00:48:47.334: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) 
Oct 23 00:48:47.334: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:48:47.334: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:48:47.334: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:48:47.334: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:47.334: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:48:47.334: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:48:47.334: INFO: pod-84cc2100-313b-4e9a-badc-c152e86ab356 started at 2021-10-23 00:48:41 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container test-container ready: false, restart count 0 Oct 23 00:48:47.334: INFO: liveness-74269a25-835f-43db-b118-83c14de7aad3 started at 2021-10-23 00:45:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:48:47.334: INFO: affinity-nodeport-tpnkv started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 23 00:48:47.334: INFO: externalname-service-szgbk started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:48:47.334: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:48:47.334: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 00:48:47.334: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container tas-extender ready: true, restart count 0 Oct 23 00:48:47.334: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:48:47.334: INFO: Container collectd ready: true, restart count 0 Oct 23 00:48:47.334: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:48:47.334: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:48:47.334: INFO: affinity-nodeport-mrgw7 started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 23 00:48:47.334: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:48:47.334: INFO: netserver-1 started at 2021-10-23 00:48:38 +0000 UTC (0+1 container statuses recorded) Oct 23 00:48:47.334: INFO: Container webserver ready: false, restart count 0 Oct 23 00:48:47.334: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:48:47.334: INFO: Container nodereport ready: true, restart count 1 Oct 23 00:48:47.334: INFO: Container reconcile ready: true, restart count 0 W1023 00:48:47.348677 27 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 00:48:48.029: INFO: Latency metrics for node node2
Oct 23 00:48:48.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1791" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [156.197 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 23 00:48:26.301: Unexpected error:
      <*errors.errorString | 0xc0079d62c0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31129 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31129 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":195,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:48:41.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 23 00:48:41.302: INFO: Waiting up to 5m0s for pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356" in namespace "emptydir-4003" to be "Succeeded or Failed"
Oct 23 00:48:41.304: INFO: Pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074138ms
Oct 23 00:48:43.307: INFO: Pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004992015s
Oct 23 00:48:45.310: INFO: Pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007951606s
Oct 23 00:48:47.313: INFO: Pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01113349s
Oct 23 00:48:49.316: INFO: Pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.014852133s
STEP: Saw pod success
Oct 23 00:48:49.317: INFO: Pod "pod-84cc2100-313b-4e9a-badc-c152e86ab356" satisfied condition "Succeeded or Failed"
Oct 23 00:48:49.319: INFO: Trying to get logs from node node2 pod pod-84cc2100-313b-4e9a-badc-c152e86ab356 container test-container:
STEP: delete the pod
Oct 23 00:48:49.724: INFO: Waiting for pod pod-84cc2100-313b-4e9a-badc-c152e86ab356 to disappear
Oct 23 00:48:49.726: INFO: Pod pod-84cc2100-313b-4e9a-badc-c152e86ab356 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:48:49.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4003" for this suite.

• [SLOW TEST:8.474 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":694,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:48:49.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:48:49.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9529" for this suite.
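The NodePort session-affinity failure recorded above exercises Service.spec.sessionAffinity: the test flips a NodePort service between None and ClientIP and expects connections from a fixed client to stick to one backend (or not) accordingly. Note the error is a reachability timeout on 10.10.190.207:31129, which points at node-port connectivity rather than the affinity logic itself. A minimal sketch of the Service shape involved; the name, selector, and port values are illustrative assumptions, since the log does not show the object:

```go
// Sketch: a NodePort Service whose sessionAffinity the test would later
// switch between None and ClientIP. Illustrative values, not the suite's own.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport", Namespace: "services-1791"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport"}, // assumed pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
			// Switching this to corev1.ServiceAffinityClientIP pins each
			// client IP to one backend; switching back to None resumes
			// round-robin-style spreading.
			SessionAffinity: corev1.ServiceAffinityNone,
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```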
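The EndpointSlice check just above is a pure API read: it verifies that the default/kubernetes Endpoints object exists and that a mirrored EndpointSlice points at the same API-server address. A minimal client-go sketch of those two lookups, assuming a reachable cluster, the kubeconfig path this suite uses, and client-go v0.21+ (where discovery.k8s.io/v1 is available, as the cluster's API group list later in this log confirms):

```go
// Sketch: the lookups behind "should have Endpoints and EndpointSlices
// pointing to API Server". Error handling is abbreviated.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The API server publishes its own address under default/kubernetes.
	ep, err := client.CoreV1().Endpoints("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Endpoints subsets: %v\n", ep.Subsets)

	// The mirrored EndpointSlice carries the kubernetes.io/service-name label.
	slices, err := client.DiscoveryV1().EndpointSlices("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kubernetes"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("matching EndpointSlices: %d\n", len(slices.Items))
}
```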
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":35,"skipped":715,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:48.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-0e4d19df-6df5-46ed-8f94-8bc0ff6b03df STEP: Creating a pod to test consume secrets Oct 23 00:48:48.088: INFO: Waiting up to 5m0s for pod "pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885" in namespace "secrets-5476" to be "Succeeded or Failed" Oct 23 00:48:48.090: INFO: Pod "pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071621ms Oct 23 00:48:50.094: INFO: Pod "pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005337058s Oct 23 00:48:52.097: INFO: Pod "pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008445097s Oct 23 00:48:54.100: INFO: Pod "pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011615593s STEP: Saw pod success Oct 23 00:48:54.100: INFO: Pod "pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885" satisfied condition "Succeeded or Failed" Oct 23 00:48:54.102: INFO: Trying to get logs from node node2 pod pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885 container secret-volume-test: STEP: delete the pod Oct 23 00:48:54.135: INFO: Waiting for pod pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885 to disappear Oct 23 00:48:54.137: INFO: Pod pod-secrets-34788b7d-0aaf-4469-8fe2-291457e83885 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:54.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5476" for this suite. 
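For context on what "with mappings" means in the Secrets test above: the secret is mounted through a volume whose items list remaps a data key to a custom file path instead of the default key-named path. A sketch of that pod shape, reusing the secret name, namespace, and container name from the log; the key/path pair, image, and command are illustrative assumptions:

```go
// Sketch: a Secret consumed as a volume with a key-to-path mapping.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-5476"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-0e4d19df-6df5-46ed-8f94-8bc0ff6b03df",
						// The "mapping": key data-1 is surfaced at an
						// explicit path rather than at /data-1 (assumed names).
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```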
• [SLOW TEST:6.090 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":198,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:44.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:48:44.824: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 23 00:48:52.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2885 --namespace=crd-publish-openapi-2885 create -f -' Oct 23 00:48:53.323: INFO: stderr: "" Oct 23 00:48:53.323: INFO: stdout: "e2e-test-crd-publish-openapi-6572-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 23 00:48:53.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2885 --namespace=crd-publish-openapi-2885 delete e2e-test-crd-publish-openapi-6572-crds test-cr' Oct 23 00:48:53.489: INFO: stderr: "" Oct 23 00:48:53.489: INFO: stdout: "e2e-test-crd-publish-openapi-6572-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 23 00:48:53.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2885 --namespace=crd-publish-openapi-2885 apply -f -' Oct 23 00:48:53.844: INFO: stderr: "" Oct 23 00:48:53.844: INFO: stdout: "e2e-test-crd-publish-openapi-6572-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 23 00:48:53.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2885 --namespace=crd-publish-openapi-2885 delete e2e-test-crd-publish-openapi-6572-crds test-cr' Oct 23 00:48:54.021: INFO: stderr: "" Oct 23 00:48:54.021: INFO: stdout: "e2e-test-crd-publish-openapi-6572-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 23 00:48:54.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2885 explain e2e-test-crd-publish-openapi-6572-crds' Oct 23 00:48:54.343: INFO: stderr: "" Oct 23 00:48:54.343: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6572-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in 
nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:57.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2885" for this suite. • [SLOW TEST:13.143 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":18,"skipped":227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:49.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:48:50.134: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:48:52.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546930, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546930, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546930, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546930, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:48:55.153: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 23 00:48:56.153: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 23 00:48:57.152: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 23 00:48:58.152: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:48:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6587" for this suite. STEP: Destroying namespace "webhook-6587-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.349 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":36,"skipped":723,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:47:52.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:47:53.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 23 00:48:00.547: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T00:48:00Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T00:48:00Z]] name:name1 resourceVersion:65183 uid:599952a7-f587-4ad0-abc2-d212ff6b4430] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 23 00:48:10.554: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T00:48:10Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T00:48:10Z]] name:name2 resourceVersion:65448 uid:63caea13-bb34-47d6-ae75-07d414b6bdbd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 23 00:48:20.560: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T00:48:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T00:48:20Z]] name:name1 resourceVersion:65754 uid:599952a7-f587-4ad0-abc2-d212ff6b4430] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 23 00:48:30.565: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T00:48:10Z generation:2 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T00:48:30Z]] name:name2 resourceVersion:66005 uid:63caea13-bb34-47d6-ae75-07d414b6bdbd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 23 00:48:40.572: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T00:48:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T00:48:20Z]] name:name1 resourceVersion:67139 uid:599952a7-f587-4ad0-abc2-d212ff6b4430] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 23 00:48:50.577: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-10-23T00:48:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-10-23T00:48:30Z]] name:name2 resourceVersion:68272 uid:63caea13-bb34-47d6-ae75-07d414b6bdbd] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:01.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5420" for this suite. 
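The crd-watch sequence above (ADDED, MODIFIED, DELETED per CR) is exactly what the dynamic client surfaces when watching a custom resource. A sketch of such a watch against the group/version seen in the events; the plural resource name "noxus" and the cluster-scoped watch are assumptions, since the log only shows the kind WishIHadChosenNoxu and apiVersion mygroup.example.com/v1beta1:

```go
// Sketch: watching custom resources with the dynamic client.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus", // assumed plural for kind WishIHadChosenNoxu
	}
	// Cluster-scoped watch; for a namespaced CRD, insert .Namespace(ns).
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each create/update/delete of a CR arrives as ADDED/MODIFIED/DELETED,
	// the same sequence the test logs above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
```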
• [SLOW TEST:68.119 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":29,"skipped":472,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:57.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:48:58.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560" in namespace "projected-1799" to be "Succeeded or Failed" Oct 23 00:48:58.034: INFO: Pod "downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560": Phase="Pending", Reason="", readiness=false. Elapsed: 1.915031ms Oct 23 00:49:00.038: INFO: Pod "downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005641954s Oct 23 00:49:02.047: INFO: Pod "downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014768771s STEP: Saw pod success Oct 23 00:49:02.047: INFO: Pod "downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560" satisfied condition "Succeeded or Failed" Oct 23 00:49:02.050: INFO: Trying to get logs from node node1 pod downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560 container client-container: STEP: delete the pod Oct 23 00:49:02.075: INFO: Waiting for pod downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560 to disappear Oct 23 00:49:02.077: INFO: Pod downwardapi-volume-6f505c55-9b1d-4b94-af47-4b59ca172560 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:02.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1799" for this suite. 
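The projected downwardAPI test above exposes the container's own cpu limit as a file through a projected volume, then reads it back from the container (the log fetches logs from client-container). A sketch of that pod shape; the 1250m limit, image, command, and mount path are illustrative assumptions:

```go
// Sketch: a projected downwardAPI volume surfacing limits.cpu as a file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example", Namespace: "projected-1799"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// resourceFieldRef resolves against the
									// named container's own resources.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```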
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":253,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:58.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Oct 23 00:48:58.227: INFO: Waiting up to 5m0s for pod "var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e" in namespace "var-expansion-1614" to be "Succeeded or Failed" Oct 23 00:48:58.230: INFO: Pod "var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.438006ms Oct 23 00:49:00.233: INFO: Pod "var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005939255s Oct 23 00:49:02.238: INFO: Pod "var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010744491s STEP: Saw pod success Oct 23 00:49:02.238: INFO: Pod "var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e" satisfied condition "Succeeded or Failed" Oct 23 00:49:02.240: INFO: Trying to get logs from node node2 pod var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e container dapi-container: STEP: delete the pod Oct 23 00:49:02.251: INFO: Waiting for pod var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e to disappear Oct 23 00:49:02.252: INFO: Pod var-expansion-b6374927-8628-4a0e-8506-4fb6f34de26e no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:02.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1614" for this suite. 
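The var-expansion test above relies on $(NAME) syntax: an env entry may reference any variable defined earlier in the same list, and the kubelet expands it before starting the container. A sketch with illustrative names and values, since the log only reveals the container name dapi-container:

```go
// Sketch: composing env vars with $(NAME) expansion in a pod spec.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example", Namespace: "var-expansion-1614"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"}, // assumed values
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) expand because both are defined
					// earlier in this list; an unknown $(X) is left verbatim.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```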
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:46:22.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2323 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Oct 23 00:46:23.029: INFO: Found 0 stateful pods, waiting for 3 Oct 23 00:46:33.033: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:46:33.033: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:46:33.033: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 23 00:46:43.034: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:46:43.034: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:46:43.034: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Oct 23 00:46:43.062: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 23 00:46:53.095: INFO: Updating stateful set ss2 Oct 23 00:46:53.099: INFO: Waiting for Pod statefulset-2323/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 23 00:47:03.109: INFO: Waiting for Pod statefulset-2323/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Oct 23 00:47:13.125: INFO: Found 1 stateful pods, waiting for 3 Oct 23 00:47:23.131: INFO: Found 2 stateful pods, waiting for 3 Oct 23 00:47:33.131: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:47:33.131: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:47:33.131: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 23 00:47:43.131: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:47:43.131: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 23 00:47:43.131: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: 
Performing a phased rolling update Oct 23 00:47:43.155: INFO: Updating stateful set ss2 Oct 23 00:47:43.160: INFO: Waiting for Pod statefulset-2323/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 23 00:47:53.169: INFO: Waiting for Pod statefulset-2323/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 23 00:48:03.186: INFO: Updating stateful set ss2 Oct 23 00:48:03.192: INFO: Waiting for StatefulSet statefulset-2323/ss2 to complete update Oct 23 00:48:03.192: INFO: Waiting for Pod statefulset-2323/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Oct 23 00:48:13.198: INFO: Waiting for StatefulSet statefulset-2323/ss2 to complete update Oct 23 00:48:13.198: INFO: Waiting for Pod statefulset-2323/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 00:48:23.200: INFO: Deleting all statefulset in ns statefulset-2323 Oct 23 00:48:23.202: INFO: Scaling statefulset ss2 to 0 Oct 23 00:49:03.216: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:49:03.218: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:03.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2323" for this suite. • [SLOW TEST:160.243 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":6,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:03.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:03.574: INFO: Checking APIGroup: apiregistration.k8s.io Oct 23 00:49:03.575: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 23 00:49:03.575: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.575: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 23 
00:49:03.575: INFO: Checking APIGroup: apps Oct 23 00:49:03.576: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 23 00:49:03.576: INFO: Versions found [{apps/v1 v1}] Oct 23 00:49:03.576: INFO: apps/v1 matches apps/v1 Oct 23 00:49:03.576: INFO: Checking APIGroup: events.k8s.io Oct 23 00:49:03.576: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 23 00:49:03.576: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.576: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 23 00:49:03.576: INFO: Checking APIGroup: authentication.k8s.io Oct 23 00:49:03.577: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 23 00:49:03.577: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.577: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Oct 23 00:49:03.577: INFO: Checking APIGroup: authorization.k8s.io Oct 23 00:49:03.578: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 23 00:49:03.578: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.578: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Oct 23 00:49:03.578: INFO: Checking APIGroup: autoscaling Oct 23 00:49:03.579: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 23 00:49:03.579: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 23 00:49:03.579: INFO: autoscaling/v1 matches autoscaling/v1 Oct 23 00:49:03.579: INFO: Checking APIGroup: batch Oct 23 00:49:03.580: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 23 00:49:03.580: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 23 00:49:03.580: INFO: batch/v1 matches batch/v1 Oct 23 00:49:03.580: INFO: Checking APIGroup: certificates.k8s.io Oct 23 00:49:03.580: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 23 00:49:03.580: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.580: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 23 00:49:03.580: INFO: Checking APIGroup: networking.k8s.io Oct 23 00:49:03.581: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 23 00:49:03.581: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.581: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 23 00:49:03.581: INFO: Checking APIGroup: extensions Oct 23 00:49:03.582: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 23 00:49:03.582: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 23 00:49:03.582: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 23 00:49:03.582: INFO: Checking APIGroup: policy Oct 23 00:49:03.583: INFO: PreferredVersion.GroupVersion: policy/v1 Oct 23 00:49:03.583: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Oct 23 00:49:03.583: INFO: policy/v1 matches policy/v1 Oct 23 00:49:03.583: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 23 00:49:03.583: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 23 00:49:03.583: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.583: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 23 00:49:03.583: INFO: Checking APIGroup: storage.k8s.io Oct 23 00:49:03.584: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 23 00:49:03.584: INFO: Versions found 
[{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.584: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 23 00:49:03.584: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 23 00:49:03.585: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 23 00:49:03.585: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.585: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Oct 23 00:49:03.585: INFO: Checking APIGroup: apiextensions.k8s.io Oct 23 00:49:03.586: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 23 00:49:03.586: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.586: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 23 00:49:03.586: INFO: Checking APIGroup: scheduling.k8s.io Oct 23 00:49:03.586: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 23 00:49:03.586: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.586: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 23 00:49:03.586: INFO: Checking APIGroup: coordination.k8s.io Oct 23 00:49:03.587: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 23 00:49:03.587: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.588: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 23 00:49:03.588: INFO: Checking APIGroup: node.k8s.io Oct 23 00:49:03.588: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Oct 23 00:49:03.589: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.589: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Oct 23 00:49:03.589: INFO: Checking APIGroup: discovery.k8s.io Oct 23 00:49:03.590: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Oct 23 00:49:03.590: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.590: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Oct 23 00:49:03.590: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Oct 23 00:49:03.590: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Oct 23 00:49:03.590: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.590: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Oct 23 00:49:03.590: INFO: Checking APIGroup: intel.com Oct 23 00:49:03.591: INFO: PreferredVersion.GroupVersion: intel.com/v1 Oct 23 00:49:03.591: INFO: Versions found [{intel.com/v1 v1}] Oct 23 00:49:03.591: INFO: intel.com/v1 matches intel.com/v1 Oct 23 00:49:03.591: INFO: Checking APIGroup: k8s.cni.cncf.io Oct 23 00:49:03.592: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 Oct 23 00:49:03.592: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] Oct 23 00:49:03.592: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 Oct 23 00:49:03.592: INFO: Checking APIGroup: monitoring.coreos.com Oct 23 00:49:03.593: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 Oct 23 00:49:03.593: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] Oct 23 00:49:03.593: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 Oct 23 00:49:03.593: INFO: Checking APIGroup: telemetry.intel.com Oct 23 00:49:03.593: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 Oct 23 
00:49:03.593: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] Oct 23 00:49:03.593: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 Oct 23 00:49:03.593: INFO: Checking APIGroup: custom.metrics.k8s.io Oct 23 00:49:03.594: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 Oct 23 00:49:03.594: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] Oct 23 00:49:03.594: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:03.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-3093" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":7,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:02.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-0e284a44-bd42-4885-89cd-555abef24087 STEP: Creating a pod to test consume secrets Oct 23 00:49:02.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd" in namespace "projected-5726" to be "Succeeded or Failed" Oct 23 00:49:02.136: INFO: Pod "pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.79745ms Oct 23 00:49:04.139: INFO: Pod "pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007051107s Oct 23 00:49:06.144: INFO: Pod "pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011809126s STEP: Saw pod success Oct 23 00:49:06.144: INFO: Pod "pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd" satisfied condition "Succeeded or Failed" Oct 23 00:49:06.146: INFO: Trying to get logs from node node1 pod pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd container projected-secret-volume-test: STEP: delete the pod Oct 23 00:49:06.224: INFO: Waiting for pod pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd to disappear Oct 23 00:49:06.226: INFO: Pod pod-projected-secrets-66a78484-0a0e-4bf6-bcbe-a3abc62ddebd no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:06.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5726" for this suite. 
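Annotation: the projected-secret test above mounts one Secret key at a remapped path through a projected volume and asserts on the file contents. A minimal standalone sketch of the same shape, assuming hypothetical object names and a plain busybox image (the real manifests are generated by the Go test code, not by kubectl):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.34
    # prints the secret value that was remapped to new-path-data-1
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1           # key in the Secret
            path: new-path-data-1 # file name inside the mount (the "mapping")
EOF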
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":256,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:38.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3096 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 00:48:38.033: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 00:48:38.063: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:40.067: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:42.067: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:44.068: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:46.066: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:48.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:48:50.067: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:48:52.068: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:48:54.067: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:48:56.067: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:48:58.068: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 00:48:58.072: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 23 00:49:00.077: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 00:49:06.097: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 00:49:06.097: INFO: Breadth first check of 10.244.3.76 on host 10.10.190.207... Oct 23 00:49:06.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.95:9080/dial?request=hostname&protocol=http&host=10.244.3.76&port=8080&tries=1'] Namespace:pod-network-test-3096 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:06.099: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:06.189: INFO: Waiting for responses: map[] Oct 23 00:49:06.189: INFO: reached 10.244.3.76 after 0/1 tries Oct 23 00:49:06.189: INFO: Breadth first check of 10.244.4.86 on host 10.10.190.208... 
Oct 23 00:49:06.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.95:9080/dial?request=hostname&protocol=http&host=10.244.4.86&port=8080&tries=1'] Namespace:pod-network-test-3096 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:06.192: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:06.304: INFO: Waiting for responses: map[] Oct 23 00:49:06.304: INFO: reached 10.244.4.86 after 0/1 tries Oct 23 00:49:06.304: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:06.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3096" for this suite. • [SLOW TEST:28.309 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":196,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:06.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6887" for this suite. 
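Annotation: the ConfigMap test above exercises the `immutable` field, stable in v1.21: once set, `data` and `binaryData` can no longer be changed and the object has to be deleted and recreated instead. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-immutable
data:
  key: value
immutable: true
EOF
# Any later write to the data is rejected by the API server:
kubectl patch configmap demo-immutable --type=merge -p '{"data":{"key":"other"}}'
# Error from server (Invalid): ... field is immutable when `immutable` is set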
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":16,"skipped":212,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:02.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 23 00:49:02.376: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 23 00:49:07.381: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:08.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9152" for this suite. • [SLOW TEST:6.054 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":38,"skipped":767,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:06.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Oct 23 00:49:06.475: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:08.478: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:10.479: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Oct 23 00:49:11.495: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:12.515: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8915" for this suite. • [SLOW TEST:6.081 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":17,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:03.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Oct 23 00:49:03.714: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:05.719: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:07.718: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:09.717: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:11.718: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Oct 23 00:49:11.734: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:13.738: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:15.737: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 23 00:49:15.740: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:15.740: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:15.864: INFO: Exec stderr: "" Oct 23 00:49:15.864: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:15.864: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:15.985: INFO: Exec stderr: "" Oct 23 00:49:15.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:15.985: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:16.353: INFO: Exec stderr: "" Oct 23 00:49:16.353: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:16.353: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:16.780: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 23 00:49:16.780: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:16.780: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:16.880: INFO: Exec stderr: "" Oct 23 00:49:16.880: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:16.880: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:17.325: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 23 00:49:17.325: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:17.325: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:17.406: INFO: Exec stderr: "" Oct 23 00:49:17.406: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:17.406: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:17.501: INFO: Exec stderr: "" Oct 23 00:49:17.501: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:17.501: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:17.582: INFO: Exec stderr: "" Oct 23 00:49:17.582: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7064 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:17.582: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:17.671: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:17.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-7064" for this suite. 
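Annotation: the sequence of `cat /etc/hosts` execs above distinguishes three cases: with hostNetwork=false the kubelet generates /etc/hosts for the pod; a container that mounts its own file over /etc/hosts opts out; and with hostNetwork=true the pod sees the node's hosts file instead. A quick way to observe the managed case, with hypothetical names:

kubectl run etc-hosts-demo --image=busybox:1.34 --restart=Never -- sleep 3600
kubectl exec etc-hosts-demo -- head -n 1 /etc/hosts
# Kubernetes-managed hosts file.   <- header the kubelet writes into managed files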
• [SLOW TEST:14.007 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":179,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:17.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 00:49:17.755: INFO: Waiting up to 5m0s for pod "downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53" in namespace "downward-api-6956" to be "Succeeded or Failed" Oct 23 00:49:17.757: INFO: Pod "downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38863ms Oct 23 00:49:19.760: INFO: Pod "downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00578982s Oct 23 00:49:21.764: INFO: Pod "downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009533755s STEP: Saw pod success Oct 23 00:49:21.764: INFO: Pod "downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53" satisfied condition "Succeeded or Failed" Oct 23 00:49:21.766: INFO: Trying to get logs from node node2 pod downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53 container dapi-container: STEP: delete the pod Oct 23 00:49:21.780: INFO: Waiting for pod downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53 to disappear Oct 23 00:49:21.782: INFO: Pod downward-api-7bcab3e9-ab43-44b3-88c1-061a90535d53 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:21.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6956" for this suite. 
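Annotation: the Downward API test above relies on a documented fallback: when a container declares no resource limits, `resourceFieldRef` values for limits.cpu and limits.memory resolve to the node's allocatable capacity. A minimal sketch, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.34
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # no resources.limits are set, so these fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs downward-api-demo   # once the pod has Succeeded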
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":195,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:12.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:12.656: INFO: The status of Pod server-envvars-f7c5ef24-6edd-49aa-ab7f-f48c463eb250 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:14.662: INFO: The status of Pod server-envvars-f7c5ef24-6edd-49aa-ab7f-f48c463eb250 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:16.661: INFO: The status of Pod server-envvars-f7c5ef24-6edd-49aa-ab7f-f48c463eb250 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:18.660: INFO: The status of Pod server-envvars-f7c5ef24-6edd-49aa-ab7f-f48c463eb250 is Running (Ready = true) Oct 23 00:49:18.682: INFO: Waiting up to 5m0s for pod "client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306" in namespace "pods-891" to be "Succeeded or Failed" Oct 23 00:49:18.685: INFO: Pod "client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250441ms Oct 23 00:49:20.688: INFO: Pod "client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006024747s Oct 23 00:49:22.692: INFO: Pod "client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00959014s STEP: Saw pod success Oct 23 00:49:22.692: INFO: Pod "client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306" satisfied condition "Succeeded or Failed" Oct 23 00:49:22.695: INFO: Trying to get logs from node node2 pod client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306 container env3cont: STEP: delete the pod Oct 23 00:49:22.706: INFO: Waiting for pod client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306 to disappear Oct 23 00:49:22.708: INFO: Pod client-envvars-8b68304f-75bd-42d8-b85c-3975339a9306 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:22.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-891" for this suite. 
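Annotation: the pods test above checks the docker-links-style environment variables the kubelet injects for every Service that already exists when a pod starts (a server pod plus a Service are created first, then the client pod asserts on variables like <NAME>_SERVICE_HOST). A sketch with a hypothetical service; note the pod must be created after the Service for the variables to appear:

kubectl create service clusterip fooservice --tcp=8765:8080
kubectl run envvars-demo --image=busybox:1.34 --restart=Never -- sh -c 'env | grep ^FOOSERVICE_'
kubectl logs envvars-demo
# FOOSERVICE_SERVICE_HOST=10.96.x.x   (values depend on the cluster)
# FOOSERVICE_SERVICE_PORT=8765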
• [SLOW TEST:10.099 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":270,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:06.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Oct 23 00:49:06.287: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.287: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.290: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.290: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.299: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.299: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.319: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:06.319: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 and labels map[test-deployment-static:true] Oct 23 00:49:09.732: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment-static:true] Oct 23 00:49:09.732: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment-static:true] Oct 23 00:49:09.738: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Oct 23 00:49:09.744: INFO: observed event type ADDED STEP: waiting for Replicas to scale Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in 
namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 0 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.746: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.749: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.749: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.756: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.756: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:09.763: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:09.763: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:09.769: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:09.769: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:14.358: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:14.358: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:14.373: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 STEP: listing Deployments Oct 23 00:49:14.376: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Oct 23 00:49:14.388: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Oct 23 00:49:14.394: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:14.394: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:14.398: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:14.405: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment:updated 
test-deployment-static:true] Oct 23 00:49:14.411: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:18.467: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:18.483: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:18.497: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:18.503: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Oct 23 00:49:22.779: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Oct 23 00:49:22.807: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:22.807: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:22.807: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:22.807: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:22.807: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 1 Oct 23 00:49:22.807: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:22.808: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 3 Oct 23 00:49:22.808: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:22.808: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 2 Oct 23 00:49:22.808: INFO: observed Deployment test-deployment in namespace deployment-2406 with ReadyReplicas 3 STEP: deleting the Deployment Oct 23 00:49:22.814: INFO: observed event type MODIFIED Oct 23 00:49:22.814: INFO: observed event type MODIFIED Oct 23 00:49:22.814: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED Oct 23 00:49:22.815: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 00:49:22.819: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:22.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2406" for this suite. 
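Annotation: the lifecycle walked through above (create, wait for Ready replicas, patch labels, update the pod template, watch the rollout, delete) can be approximated imperatively. The names and images below mirror the test, but the commands are illustrative, not what the suite actually runs:

kubectl create deployment test-deployment --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --replicas=2
kubectl rollout status deployment/test-deployment
kubectl patch deployment test-deployment --type=merge -p '{"metadata":{"labels":{"test-deployment":"patched"}}}'
kubectl set image deployment/test-deployment httpd=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
kubectl rollout status deployment/test-deployment   # ReadyReplicas dips and recovers, as in the log above
kubectl delete deployment test-deployment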
• [SLOW TEST:16.575 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":21,"skipped":265,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:21.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Oct 23 00:49:21.856: INFO: Waiting up to 5m0s for pod "var-expansion-3d7d3f79-859e-4724-b956-df09312443e9" in namespace "var-expansion-6839" to be "Succeeded or Failed" Oct 23 00:49:21.858: INFO: Pod "var-expansion-3d7d3f79-859e-4724-b956-df09312443e9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.924683ms Oct 23 00:49:23.862: INFO: Pod "var-expansion-3d7d3f79-859e-4724-b956-df09312443e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006017658s Oct 23 00:49:25.866: INFO: Pod "var-expansion-3d7d3f79-859e-4724-b956-df09312443e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009502419s Oct 23 00:49:27.870: INFO: Pod "var-expansion-3d7d3f79-859e-4724-b956-df09312443e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01402075s STEP: Saw pod success Oct 23 00:49:27.870: INFO: Pod "var-expansion-3d7d3f79-859e-4724-b956-df09312443e9" satisfied condition "Succeeded or Failed" Oct 23 00:49:27.873: INFO: Trying to get logs from node node2 pod var-expansion-3d7d3f79-859e-4724-b956-df09312443e9 container dapi-container: STEP: delete the pod Oct 23 00:49:27.886: INFO: Waiting for pod var-expansion-3d7d3f79-859e-4724-b956-df09312443e9 to disappear Oct 23 00:49:27.888: INFO: Pod var-expansion-3d7d3f79-859e-4724-b956-df09312443e9 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:27.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6839" for this suite. 
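Annotation: the variable-expansion test above mounts a volume under a directory named from an environment variable. The stable field for this is `subPathExpr`, which expands `$(VAR)` references from the container's env. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.34
    command: ["sh", "-c", "touch /volume_mount/hello && ls /volume_mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)   # files land under <volume>/var-expansion-demo/
  volumes:
  - name: workdir
    emptyDir: {}
EOF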
• [SLOW TEST:6.075 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":10,"skipped":207,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:27.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:27.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9690" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":11,"skipped":221,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:22.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:22.798: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 00:49:22.812: INFO: The status of Pod pod-exec-websocket-706cac3e-c096-4de7-8f49-9d5d302e2b41 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:24.816: INFO: The status of Pod pod-exec-websocket-706cac3e-c096-4de7-8f49-9d5d302e2b41 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:26.816: INFO: The status of Pod pod-exec-websocket-706cac3e-c096-4de7-8f49-9d5d302e2b41 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:28.815: INFO: The status of Pod pod-exec-websocket-706cac3e-c096-4de7-8f49-9d5d302e2b41 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:28.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6976" for this suite. 
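Annotation: the exec-over-websockets test above talks to the same API-server endpoint that kubectl exec uses, but dials it with a raw websocket client instead of letting the CLI negotiate the connection upgrade. Roughly equivalent, pod name hypothetical; the URL shape is the standard exec subresource:

kubectl run exec-demo --image=busybox:1.34 --restart=Never -- sleep 3600
kubectl exec exec-demo -- echo remote-exec-ok
# The subresource the test dials directly over wss://
#   /api/v1/namespaces/default/pods/exec-demo/exec?command=echo&command=remote-exec-ok&stdout=true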
• [SLOW TEST:6.130 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:22.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Oct 23 00:49:22.865: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:29.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1420" for this suite. 
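Annotation: the init-container test above asserts that with restartPolicy Never a failing init container is not retried, the app containers never start, and the pod lands terminally in an Init error state. A minimal reproduction with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.34
    command: ["false"]    # exits non-zero; not retried under restartPolicy Never
  containers:
  - name: run1
    image: busybox:1.34
    command: ["true"]     # never started
EOF
kubectl get pod init-fail-demo
# NAME             READY   STATUS       RESTARTS   AGE
# init-fail-demo   0/1     Init:Error   0          5s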
• [SLOW TEST:6.409 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":22,"skipped":270,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:01.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4673 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 00:49:01.141: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 00:49:01.169: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:03.174: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:05.175: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:07.175: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:09.173: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:11.174: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:13.173: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:15.176: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:17.174: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:19.173: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:21.174: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:49:23.174: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 00:49:23.178: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 00:49:29.221: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 00:49:29.221: INFO: Going to poll 10.244.3.79 on port 8080 at least 0 times, with a maximum of 34 tries before failing Oct 23 00:49:29.226: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.79:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4673 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:29.226: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:29.312: INFO: Found all 1 expected endpoints: [netserver-0] Oct 23 00:49:29.312: INFO: Going to poll 10.244.4.96 on port 8080 at least 0 times, with a 
maximum of 34 tries before failing Oct 23 00:49:29.314: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.96:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4673 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:29.314: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:49:29.420: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:29.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4673" for this suite. • [SLOW TEST:28.311 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:29.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:49:29.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53" in namespace "projected-9850" to be "Succeeded or Failed" Oct 23 00:49:29.319: INFO: Pod "downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00416ms Oct 23 00:49:31.322: INFO: Pod "downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006885044s Oct 23 00:49:33.327: INFO: Pod "downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011592578s STEP: Saw pod success Oct 23 00:49:33.327: INFO: Pod "downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53" satisfied condition "Succeeded or Failed" Oct 23 00:49:33.329: INFO: Trying to get logs from node node1 pod downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53 container client-container: STEP: delete the pod Oct 23 00:49:33.342: INFO: Waiting for pod downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53 to disappear Oct 23 00:49:33.343: INFO: Pod downwardapi-volume-c40ae384-b947-4af8-b1a8-180c852efe53 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:33.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9850" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":271,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:28.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-e7a0800c-4c42-4686-833f-c3f6c7428fc6 STEP: Creating a pod to test consume secrets Oct 23 00:49:29.011: INFO: Waiting up to 5m0s for pod "pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f" in namespace "secrets-9662" to be "Succeeded or Failed" Oct 23 00:49:29.013: INFO: Pod "pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.873902ms Oct 23 00:49:31.016: INFO: Pod "pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005396137s Oct 23 00:49:33.020: INFO: Pod "pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008961015s Oct 23 00:49:35.024: INFO: Pod "pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012745436s STEP: Saw pod success Oct 23 00:49:35.024: INFO: Pod "pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f" satisfied condition "Succeeded or Failed" Oct 23 00:49:35.026: INFO: Trying to get logs from node node2 pod pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f container secret-volume-test: STEP: delete the pod Oct 23 00:49:35.038: INFO: Waiting for pod pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f to disappear Oct 23 00:49:35.040: INFO: Pod pod-secrets-8ded9e2e-5057-459e-918b-2efd8d98424f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:35.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9662" for this suite. 
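Editor's note: the Projected downwardAPI and Secrets volume tests above share one pattern: create a short-lived pod, poll until its phase reaches "Succeeded" or "Failed" (the "Waiting up to 5m0s ... to be 'Succeeded or Failed'" lines), then fetch the container's logs and delete the pod. A minimal client-go sketch of that wait loop, assuming a cluster reachable through the usual kubeconfig; the pod and namespace names here are placeholders, not the generated names from this run:

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from ~/.kube/config, like the suite's ">>> kubeConfig" step.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 5m, matching the "Waiting up to 5m0s" timeout above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "test-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", pod.Name, pod.Status.Phase)
		// Terminal phases end the wait; anything else keeps polling.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}

The per-iteration "Phase=..., Elapsed: ..." lines in the log above are exactly this loop's progress output.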
• [SLOW TEST:6.075 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":326,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:29.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Oct 23 00:49:29.522: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Oct 23 00:49:31.532: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Oct 23 00:49:33.541: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:35.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-5014" for this suite. 
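Editor's note: the EndpointSliceMirroring steps above (create a custom Endpoints resource, watch a mirrored EndpointSlice appear, update the address from 10.1.2.3 to 10.2.3.4, then delete) can be reproduced with a selector-less Service plus a hand-made Endpoints object of the same name. A sketch, assuming a client built as in the previous snippet and a selector-less Service named "example-custom-endpoints" already existing in the namespace; all names are illustrative:

package demo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func mirrorDemo(client *kubernetes.Clientset, ns string) error {
	// A custom Endpoints resource; the mirroring controller copies it into
	// EndpointSlices because no selector-driven controller owns it.
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "example-custom-endpoints"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.1.2.3"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	if _, err := client.CoreV1().Endpoints(ns).Create(context.TODO(), ep, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Mirrored slices carry the kubernetes.io/service-name label, which is how
	// the test above counts them ("Waiting for at least 1 EndpointSlice to exist").
	slices, err := client.DiscoveryV1().EndpointSlices(ns).List(context.TODO(), metav1.ListOptions{
		LabelSelector: discoveryv1.LabelServiceName + "=example-custom-endpoints",
	})
	if err != nil {
		return err
	}
	fmt.Printf("mirrored EndpointSlices: %d\n", len(slices.Items))
	return nil
}

Mirroring is asynchronous, which is why the log above shows a retry ("got 0") before the slice appears.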
• [SLOW TEST:6.079 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":31,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:28.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Oct 23 00:49:30.079: INFO: running pods: 0 < 3 Oct 23 00:49:32.086: INFO: running pods: 0 < 3 Oct 23 00:49:34.084: INFO: running pods: 0 < 3 Oct 23 00:49:36.084: INFO: running pods: 2 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:38.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-151" for this suite. • [SLOW TEST:10.080 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":12,"skipped":237,"failed":0} SSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:33.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:33.383: INFO: Creating pod... 
Oct 23 00:49:33.397: INFO: Pod Quantity: 1 Status: Pending Oct 23 00:49:34.401: INFO: Pod Quantity: 1 Status: Pending Oct 23 00:49:35.401: INFO: Pod Quantity: 1 Status: Pending Oct 23 00:49:36.400: INFO: Pod Quantity: 1 Status: Pending Oct 23 00:49:37.401: INFO: Pod Quantity: 1 Status: Pending Oct 23 00:49:38.402: INFO: Pod Status: Running Oct 23 00:49:38.402: INFO: Creating service... Oct 23 00:49:38.408: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/DELETE Oct 23 00:49:38.411: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 23 00:49:38.411: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/GET Oct 23 00:49:38.413: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 23 00:49:38.413: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/HEAD Oct 23 00:49:38.415: INFO: http.Client request:HEAD | StatusCode:200 Oct 23 00:49:38.415: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/OPTIONS Oct 23 00:49:38.417: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 23 00:49:38.417: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/PATCH Oct 23 00:49:38.419: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 23 00:49:38.419: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/POST Oct 23 00:49:38.422: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Oct 23 00:49:38.422: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/pods/agnhost/proxy/some/path/with/PUT Oct 23 00:49:38.424: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Oct 23 00:49:38.424: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/DELETE Oct 23 00:49:38.428: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Oct 23 00:49:38.428: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/GET Oct 23 00:49:38.431: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Oct 23 00:49:38.431: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/HEAD Oct 23 00:49:38.434: INFO: http.Client request:HEAD | StatusCode:200 Oct 23 00:49:38.434: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/OPTIONS Oct 23 00:49:38.438: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Oct 23 00:49:38.438: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/PATCH Oct 23 00:49:38.440: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Oct 23 00:49:38.440: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/POST Oct 23 00:49:38.443: INFO: http.Client request:POST | StatusCode:200 
| Response:foo | Method:POST Oct 23 00:49:38.443: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-962/services/test-service/proxy/some/path/with/PUT Oct 23 00:49:38.447: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:38.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-962" for this suite. • [SLOW TEST:5.093 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":24,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:35.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 23 00:49:35.119: INFO: Waiting up to 5m0s for pod "pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90" in namespace "emptydir-5063" to be "Succeeded or Failed" Oct 23 00:49:35.122: INFO: Pod "pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843252ms Oct 23 00:49:37.125: INFO: Pod "pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006057278s Oct 23 00:49:39.128: INFO: Pod "pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009480769s STEP: Saw pod success Oct 23 00:49:39.129: INFO: Pod "pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90" satisfied condition "Succeeded or Failed" Oct 23 00:49:39.131: INFO: Trying to get logs from node node2 pod pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90 container test-container: STEP: delete the pod Oct 23 00:49:39.144: INFO: Waiting for pod pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90 to disappear Oct 23 00:49:39.146: INFO: Pod pod-502c920f-9ef8-4f7b-9b95-e2b1dd592c90 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:39.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5063" for this suite. 
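Editor's note: the EmptyDir case just logged ("0644 on node default medium") boils down to a pod of the following shape: an emptyDir volume with the medium left at its default (node-local storage, which is what "default medium" means), and a container that creates a file with mode 0644 and prints it back. A sketch with an illustrative image and command, not the suite's actual agnhost invocation:

package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mirrors the shape of the test pod: RestartPolicy=Never so the
// "Succeeded or Failed" wait applies, one emptyDir volume, one writer container.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// StorageMediumDefault ("") backs the volume with node storage, not tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.34",
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && stat -c '%a' /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
			}},
		},
	}
}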
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:38.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 00:49:38.542: INFO: Waiting up to 5m0s for pod "downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5" in namespace "downward-api-4861" to be "Succeeded or Failed" Oct 23 00:49:38.545: INFO: Pod "downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266874ms Oct 23 00:49:40.549: INFO: Pod "downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006696977s Oct 23 00:49:42.553: INFO: Pod "downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010809927s STEP: Saw pod success Oct 23 00:49:42.553: INFO: Pod "downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5" satisfied condition "Succeeded or Failed" Oct 23 00:49:42.556: INFO: Trying to get logs from node node2 pod downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5 container dapi-container: STEP: delete the pod Oct 23 00:49:42.569: INFO: Waiting for pod downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5 to disappear Oct 23 00:49:42.571: INFO: Pod downward-api-ce28c75e-ce5c-40e3-81b7-c3589c7281b5 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:42.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4861" for this suite. 
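Editor's note: the Downward API test above injects the pod's own UID through an environment variable rather than a volume; the mechanism is an EnvVarSource with a fieldRef. A sketch of the relevant container fragment, with placeholder variable names:

package demo

import corev1 "k8s.io/api/core/v1"

// downwardAPIEnv returns env vars the kubelet resolves from pod metadata.
// metadata.uid is what the "should provide pod UID as env vars" test asserts on.
func downwardAPIEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "POD_UID",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
			},
		},
		{
			Name: "POD_NAME",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		},
	}
}

The test container simply echoes the variables and exits, which is why the pod above goes Pending to Succeeded in about four seconds.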
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":299,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:38.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-a88d6570-7c55-4c46-928d-a1845e38d7e7 STEP: Creating a pod to test consume secrets Oct 23 00:49:38.143: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5" in namespace "projected-1140" to be "Succeeded or Failed" Oct 23 00:49:38.147: INFO: Pod "pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.91777ms Oct 23 00:49:40.150: INFO: Pod "pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006746413s Oct 23 00:49:42.155: INFO: Pod "pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011556951s Oct 23 00:49:44.159: INFO: Pod "pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015515612s STEP: Saw pod success Oct 23 00:49:44.159: INFO: Pod "pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5" satisfied condition "Succeeded or Failed" Oct 23 00:49:44.162: INFO: Trying to get logs from node node1 pod pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5 container projected-secret-volume-test: STEP: delete the pod Oct 23 00:49:44.175: INFO: Waiting for pod pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5 to disappear Oct 23 00:49:44.177: INFO: Pod pod-projected-secrets-f5aae606-4c70-43f7-a14a-bad11eabafe5 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:44.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1140" for this suite. 
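Editor's note: the projected-secret test just logged is about defaultMode: every file projected from the secret must land with the requested permission bits. The volume shape, sketched with placeholder names and an illustrative mode (the suite uses a generated secret name and stats the file from inside the pod):

package demo

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume projects a secret with an explicit defaultMode applied
// to every projected file. 0440 is illustrative, not the test's exact value.
func projectedSecretVolume() corev1.Volume {
	mode := int32(0440)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
}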
• [SLOW TEST:6.078 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:44.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 23 00:49:44.849: INFO: starting watch STEP: patching STEP: updating Oct 23 00:49:44.858: INFO: waiting for watch events with expected annotations Oct 23 00:49:44.858: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:44.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-6992" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":14,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:42.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-fe9a1f8a-e250-491a-8062-93d16cc2ca84 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:52.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9372" for this suite. 
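Editor's note: the ConfigMap case above ("binary data should be reflected in volume") exercises the BinaryData field, which holds raw bytes alongside the usual string Data; when the ConfigMap is mounted, each key becomes a file and the binary file's bytes must round-trip unmodified. A sketch with made-up keys and bytes:

package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap carries both a text payload and a binary payload; the test's
// two wait steps ("pod with text data", "pod with binary data") check each file.
func binaryConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe, 0x00, 0xff}},
	}
}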
• [SLOW TEST:10.083 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":303,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:44:39.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-7178 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7178 STEP: Creating statefulset with conflicting port in namespace statefulset-7178 STEP: Waiting until pod test-pod will start running in namespace statefulset-7178 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7178 Oct 23 00:49:43.363: FAIL: Pod ss-0 expected to be re-created at least once Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000703500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000703500, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Oct 23 00:49:43.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7178 describe po test-pod' Oct 23 00:49:43.559: INFO: stderr: "" Oct 23 00:49:43.559: INFO: stdout: "Name: test-pod\nNamespace: statefulset-7178\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Sat, 23 Oct 2021 00:44:39 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.16\"\n ],\n \"mac\": \"62:c9:30:3b:81:f5\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.16\"\n ],\n \"mac\": \"62:c9:30:3b:81:f5\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.16\nIPs:\n IP: 10.244.3.16\nContainers:\n webserver:\n Container ID: 
docker://4bd8423dc32c361a3cb2dcd5a3f2c6de465ba523d45785868b141d22b2b6502a\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Sat, 23 Oct 2021 00:44:41 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mvgdt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-mvgdt:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 320.47333ms\n Normal Created 5m2s kubelet Created container webserver\n Normal Started 5m2s kubelet Started container webserver\n" Oct 23 00:49:43.559: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-7178 Priority: 0 Node: node1/10.10.190.207 Start Time: Sat, 23 Oct 2021 00:44:39 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.16" ], "mac": "62:c9:30:3b:81:f5", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.16" ], "mac": "62:c9:30:3b:81:f5", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.3.16 IPs: IP: 10.244.3.16 Containers: webserver: Container ID: docker://4bd8423dc32c361a3cb2dcd5a3f2c6de465ba523d45785868b141d22b2b6502a Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Sat, 23 Oct 2021 00:44:41 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mvgdt (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-mvgdt: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m2s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m2s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 320.47333ms Normal Created 5m2s kubelet Created container webserver Normal Started 5m2s kubelet Started container webserver Oct 23 00:49:43.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-7178 logs test-pod --tail=100' Oct 23 
00:49:43.863: INFO: stderr: "" Oct 23 00:49:43.863: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.16. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.16. Set the 'ServerName' directive globally to suppress this message\n[Sat Oct 23 00:44:41.666535 2021] [mpm_event:notice] [pid 1:tid 140174438173544] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Oct 23 00:44:41.666572 2021] [core:notice] [pid 1:tid 140174438173544] AH00094: Command line: 'httpd -D FOREGROUND'\n" Oct 23 00:49:43.863: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.16. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.16. Set the 'ServerName' directive globally to suppress this message [Sat Oct 23 00:44:41.666535 2021] [mpm_event:notice] [pid 1:tid 140174438173544] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Sat Oct 23 00:44:41.666572 2021] [core:notice] [pid 1:tid 140174438173544] AH00094: Command line: 'httpd -D FOREGROUND' Oct 23 00:49:43.863: INFO: Deleting all statefulset in ns statefulset-7178 Oct 23 00:49:43.866: INFO: Scaling statefulset ss to 0 Oct 23 00:49:43.873: INFO: Waiting for statefulset status.replicas updated to 0 Oct 23 00:49:53.881: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-7178". STEP: Found 7 events. Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:39 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: []] Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:39 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: [9100]] Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:41 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:41 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 320.47333ms Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:41 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:41 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver Oct 23 00:49:53.893: INFO: At 2021-10-23 00:44:44 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] Oct 23 00:49:53.895: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:49:53.895: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:44:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:44:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:44:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:44:39 +0000 UTC }] Oct 23 00:49:53.895: INFO: Oct 23 00:49:53.899: INFO: Logging node info for node master1 Oct 23 00:49:53.902: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 69831 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:43 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:43 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:43 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:49:43 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:49:53.902: INFO: Logging kubelet events for node master1 Oct 23 00:49:53.904: INFO: Logging pods the kubelet 
thinks is on node master1 Oct 23 00:49:53.925: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:53.925: INFO: Container docker-registry ready: true, restart count 0 Oct 23 00:49:53.925: INFO: Container nginx ready: true, restart count 0 Oct 23 00:49:53.925: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:53.925: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:49:53.925: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:49:53.925: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:49:53.925: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 00:49:53.925: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:49:53.925: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:49:53.925: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:49:53.925: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:49:53.925: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Container coredns ready: true, restart count 2 Oct 23 00:49:53.925: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:53.925: INFO: Container kube-scheduler ready: true, restart count 0 W1023 00:49:53.940575 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 23 00:49:54.011: INFO: Latency metrics for node master1 Oct 23 00:49:54.011: INFO: Logging node info for node master2 Oct 23 00:49:54.014: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 69959 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:50 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:50 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:50 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:49:50 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:49:54.014: INFO: Logging kubelet events for node master2 Oct 23 00:49:54.016: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 00:49:54.026: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:49:54.026: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Container autoscaler ready: true, restart count 1 Oct 23 00:49:54.026: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.026: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.026: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:49:54.026: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:49:54.026: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:49:54.026: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:49:54.026: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:49:54.026: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:49:54.026: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.026: INFO: Container kube-multus ready: true, restart count 1 W1023 00:49:54.042232 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Oct 23 00:49:54.113: INFO: Latency metrics for node master2 Oct 23 00:49:54.113: INFO: Logging node info for node master3 Oct 23 00:49:54.117: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 69987 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:49:53 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:49:54.117: INFO: Logging kubelet events for node master3 Oct 23 00:49:54.120: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 00:49:54.130: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:49:54.130: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:49:54.130: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:49:54.130: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 00:49:54.130: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.130: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.130: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:49:54.130: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:49:54.130: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:49:54.130: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container coredns ready: true, restart count 2 Oct 23 00:49:54.130: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:49:54.130: INFO: 
kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.130: INFO: Container kube-multus ready: true, restart count 1 W1023 00:49:54.146213 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:49:54.214: INFO: Latency metrics for node master3 Oct 23 00:49:54.214: INFO: Logging node info for node node1 Oct 23 00:49:54.217: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 69941 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:17:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:49:54.217: INFO: Logging kubelet events for node node1 Oct 23 00:49:54.220: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 00:49:54.238: INFO: kube-flannel-2cdvd started 
at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:49:54.238: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 00:49:54.238: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 00:49:54.238: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:49:54.238: INFO: pod3 started at 2021-10-23 00:49:43 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container agnhost ready: true, restart count 0 Oct 23 00:49:54.238: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:49:54.238: INFO: test-pod started at 2021-10-23 00:44:39 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container webserver ready: true, restart count 0 Oct 23 00:49:54.238: INFO: affinity-nodeport-62bdl started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 23 00:49:54.238: INFO: pod1 started at 2021-10-23 00:49:35 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container agnhost ready: true, restart count 0 Oct 23 00:49:54.238: INFO: pod-configmaps-e96dd23b-192b-49e9-b401-87750fda05ec started at 2021-10-23 00:49:42 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.238: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container configmap-volume-binary-test ready: false, restart count 0 Oct 23 00:49:54.238: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 00:49:54.238: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 00:49:54.238: INFO: Container config-reloader ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container grafana ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container prometheus ready: true, restart count 1 Oct 23 00:49:54.238: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:49:54.238: INFO: Container collectd ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.238: INFO: externalname-service-9rwls started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:49:54.238: INFO: pod2 started at 2021-10-23 00:49:39 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container agnhost ready: true, restart count 0 Oct 23 00:49:54.238: INFO: 
prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 00:49:54.238: INFO: pod-logs-websocket-c691eb06-80eb-4c74-b312-7e34ddc3e57a started at 2021-10-23 00:49:52 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container main ready: false, restart count 0 Oct 23 00:49:54.238: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:49:54.238: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:49:54.238: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:49:54.238: INFO: e2e-host-exec started at 2021-10-23 00:49:51 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container e2e-host-exec ready: true, restart count 0 Oct 23 00:49:54.238: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 00:49:54.238: INFO: Container discover ready: false, restart count 0 Oct 23 00:49:54.238: INFO: Container init ready: false, restart count 0 Oct 23 00:49:54.238: INFO: Container install ready: false, restart count 0 Oct 23 00:49:54.238: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.238: INFO: Container nodereport ready: true, restart count 0 Oct 23 00:49:54.238: INFO: Container reconcile ready: true, restart count 0 Oct 23 00:49:54.238: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:49:54.238: INFO: var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9 started at 2021-10-23 00:49:39 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.238: INFO: Container dapi-container ready: true, restart count 0 W1023 00:49:54.253466 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
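------------------------------
The per-node pod listing above ("Logging pods the kubelet thinks is on node node1") can likewise be fetched directly from the API server with a field selector rather than through the e2e framework. A sketch under the same assumptions as the previous snippet, using the node name node1 taken from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask for every pod bound to node1, across all namespaces, and print
	// the readiness/restart fields the framework logs for each container.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%t restarts=%d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}
------------------------------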
Oct 23 00:49:54.611: INFO: Latency metrics for node node1 Oct 23 00:49:54.611: INFO: Logging node info for node node2 Oct 23 00:49:54.614: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 69940 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:49:48 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:49:54.614: INFO: Logging kubelet events for node node2 Oct 23 00:49:54.616: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 00:49:54.635: INFO: execpod2728q started at 2021-10-23 00:48:49 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.635: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:49:54.635: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:49:54.635: INFO: Container collectd ready: true, restart count 0 Oct 23 00:49:54.635: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:49:54.635: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.635: INFO: affinity-nodeport-mrgw7 started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.635: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 23 00:49:54.635: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:49:54.636: INFO: simpletest.deployment-9858f564d-fx68z started at 2021-10-23 00:48:54 +0000 UTC (0+1 
container statuses recorded) Oct 23 00:49:54.636: INFO: Container nginx ready: true, restart count 0 Oct 23 00:49:54.636: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.636: INFO: Container nodereport ready: true, restart count 1 Oct 23 00:49:54.636: INFO: Container reconcile ready: true, restart count 0 Oct 23 00:49:54.636: INFO: pod-exec-websocket-706cac3e-c096-4de7-8f49-9d5d302e2b41 started at 2021-10-23 00:49:22 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container main ready: true, restart count 0 Oct 23 00:49:54.636: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 00:49:54.636: INFO: execpod-affinityr5cmp started at 2021-10-23 00:48:49 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:49:54.636: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:49:54.636: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 00:49:54.636: INFO: Container discover ready: false, restart count 0 Oct 23 00:49:54.636: INFO: Container init ready: false, restart count 0 Oct 23 00:49:54.636: INFO: Container install ready: false, restart count 0 Oct 23 00:49:54.636: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:49:54.636: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:49:54.636: INFO: liveness-74269a25-835f-43db-b118-83c14de7aad3 started at 2021-10-23 00:45:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:49:54.636: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:49:54.636: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:49:54.636: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:49:54.636: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:49:54.636: INFO: externalname-service-szgbk started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:49:54.636: INFO: my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939-dv97h started at 2021-10-23 00:49:44 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939 ready: true, restart count 0 Oct 23 00:49:54.636: INFO: affinity-nodeport-tpnkv started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 23 00:49:54.636: INFO: simpletest.deployment-9858f564d-6dxc9 started at 2021-10-23 00:48:54 +0000 UTC (0+1 
container statuses recorded) Oct 23 00:49:54.636: INFO: Container nginx ready: true, restart count 0 Oct 23 00:49:54.636: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:49:54.636: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:49:54.636: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 00:49:54.636: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 00:49:54.637: INFO: Container tas-extender ready: true, restart count 0 W1023 00:49:54.650706 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:49:54.954: INFO: Latency metrics for node node2 Oct 23 00:49:54.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7178" for this suite. • Failure [315.661 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:43.363: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":10,"skipped":296,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:44.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:44.983: INFO: Creating ReplicaSet my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939 Oct 23 00:49:44.990: INFO: Pod name my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939: Found 0 pods out of 1 Oct 23 00:49:49.994: INFO: Pod name my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939: Found 1 pods out of 1 Oct 23 00:49:49.994: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939" is running Oct 23 00:49:49.996: INFO: Pod "my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939-dv97h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:49:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:49:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 
+0000 UTC LastTransitionTime:2021-10-23 00:49:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:49:44 +0000 UTC Reason: Message:}]) Oct 23 00:49:49.997: INFO: Trying to dial the pod Oct 23 00:49:55.006: INFO: Controller my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939: Got expected result from replica 1 [my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939-dv97h]: "my-hostname-basic-e4cf4df3-64de-4ff1-9305-f2907c405939-dv97h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:55.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8584" for this suite. • [SLOW TEST:10.054 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":15,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:54.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Oct 23 00:49:55.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2366 api-versions' Oct 23 00:49:55.138: INFO: stderr: "" Oct 23 00:49:55.138: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:55.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2366" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":11,"skipped":303,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:52.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:49:52.727: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Oct 23 00:49:52.741: INFO: The status of Pod pod-logs-websocket-c691eb06-80eb-4c74-b312-7e34ddc3e57a is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:54.744: INFO: The status of Pod pod-logs-websocket-c691eb06-80eb-4c74-b312-7e34ddc3e57a is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:56.746: INFO: The status of Pod pod-logs-websocket-c691eb06-80eb-4c74-b312-7e34ddc3e57a is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:56.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-964" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:54.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1023 00:48:55.284724 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:49:57.301: INFO: MetricsGrabber failed grab metrics. 
Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:57.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9667" for this suite. • [SLOW TEST:63.088 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":18,"skipped":221,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:35.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Oct 23 00:49:35.748: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:37.752: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:39.752: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled Oct 23 00:49:39.776: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:41.781: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:43.781: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides Oct 23 00:49:43.794: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:45.798: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:47.801: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:49.797: INFO: The status of Pod pod3 is Running (Ready = false) Oct 23 00:49:51.798: INFO: The status of Pod pod3 is Running (Ready = true) Oct 23 00:49:51.811: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:49:53.814: INFO: The status of Pod e2e-host-exec 
is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Oct 23 00:49:53.816: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.207 http://127.0.0.1:54323/hostname] Namespace:hostport-6629 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:53.816: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 Oct 23 00:49:54.306: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.207:54323/hostname] Namespace:hostport-6629 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:54.306: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.207, port: 54323 UDP Oct 23 00:49:54.415: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.207 54323] Namespace:hostport-6629 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:54.415: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:49:59.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-6629" for this suite. • [SLOW TEST:23.822 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":575,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:45:56.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-74269a25-835f-43db-b118-83c14de7aad3 in namespace container-probe-3681 Oct 23 00:46:00.654: INFO: Started pod liveness-74269a25-835f-43db-b118-83c14de7aad3 in namespace container-probe-3681 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 00:46:00.656: INFO: Initial restart count of pod liveness-74269a25-835f-43db-b118-83c14de7aad3 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:01.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3681" for this suite. • [SLOW TEST:244.582 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":136,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:55.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-691/configmap-test-6b46259a-b51f-47fa-a99d-1c2a00a45b36 STEP: Creating a pod to test consume configMaps Oct 23 00:49:55.210: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201" in namespace "configmap-691" to be "Succeeded or Failed" Oct 23 00:49:55.212: INFO: Pod "pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621876ms Oct 23 00:49:57.216: INFO: Pod "pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005927293s Oct 23 00:49:59.219: INFO: Pod "pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009241299s Oct 23 00:50:01.222: INFO: Pod "pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012174387s STEP: Saw pod success Oct 23 00:50:01.222: INFO: Pod "pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201" satisfied condition "Succeeded or Failed" Oct 23 00:50:01.224: INFO: Trying to get logs from node node2 pod pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201 container env-test: STEP: delete the pod Oct 23 00:50:01.235: INFO: Waiting for pod pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201 to disappear Oct 23 00:50:01.237: INFO: Pod pod-configmaps-9f200d9e-24f4-4124-9f96-782ba99c0201 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:01.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-691" for this suite. 
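------------------------------
The [sig-node] ConfigMap test above creates a ConfigMap, injects one of its keys into a container as an environment variable, and waits for the pod to reach "Succeeded or Failed". A minimal kubectl sketch of the same pattern (resource names and image are illustrative, not the e2e fixture's actual spec):

# Create a ConfigMap and a pod that consumes one key as an env var.
kubectl create configmap demo-config --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.34
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF

kubectl get pod configmap-env-demo -w   # wait until the phase is Succeeded
kubectl logs configmap-env-demo         # expect: CONFIG_DATA_1=value-1
------------------------------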
• [SLOW TEST:6.073 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":313,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:01.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:01.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2362" for this suite. 
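------------------------------
The [sig-instrumentation] Events API test above drives the full lifecycle (create, list, field-selector filtering, get, patch, update, delete) through the events.k8s.io client. At the kubectl level the read side looks roughly like this (namespace and selector values are illustrative; the e2e test performs the writes programmatically rather than via kubectl):

kubectl get events --all-namespaces                       # list in all namespaces
kubectl get events -n events-test                         # list in the test namespace
kubectl get events -n events-test --field-selector type=Warning
kubectl get events -n events-test --sort-by=.metadata.creationTimestamp
kubectl delete events --all -n events-test                # clean up test events
------------------------------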
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":14,"skipped":138,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:59.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:49:59.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04" in namespace "projected-8622" to be "Succeeded or Failed" Oct 23 00:49:59.590: INFO: Pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602233ms Oct 23 00:50:01.596: INFO: Pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012042589s Oct 23 00:50:03.600: INFO: Pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01662125s Oct 23 00:50:05.605: INFO: Pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021132288s Oct 23 00:50:07.609: INFO: Pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.024959044s STEP: Saw pod success Oct 23 00:50:07.609: INFO: Pod "downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04" satisfied condition "Succeeded or Failed" Oct 23 00:50:07.611: INFO: Trying to get logs from node node2 pod downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04 container client-container: STEP: delete the pod Oct 23 00:50:07.623: INFO: Waiting for pod downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04 to disappear Oct 23 00:50:07.627: INFO: Pod downwardapi-volume-50522504-2789-43ce-a12d-a257b3d74e04 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:07.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8622" for this suite. 
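------------------------------
The [sig-storage] Projected downwardAPI test above checks the defaulting rule for resourceFieldRef: when the container declares no memory limit, limits.memory resolves to the node's allocatable memory instead of zero. A minimal sketch of a pod that surfaces this value through a projected downwardAPI volume (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.34
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, on purpose.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

# With no limit set, the file contains the node's allocatable memory in bytes.
kubectl logs downwardapi-limit-demo
------------------------------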
• [SLOW TEST:8.085 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":584,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:07.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Oct 23 00:50:07.681: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5096 8dbbc882-d19b-42d3-9c1c-b4e2618ed294 70400 0 2021-10-23 00:50:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 00:50:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:50:07.681: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5096 8dbbc882-d19b-42d3-9c1c-b4e2618ed294 70401 0 2021-10-23 00:50:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 00:50:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 23 00:50:07.690: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5096 8dbbc882-d19b-42d3-9c1c-b4e2618ed294 70402 0 2021-10-23 00:50:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 00:50:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 23 00:50:07.690: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5096 8dbbc882-d19b-42d3-9c1c-b4e2618ed294 70403 0 2021-10-23 00:50:07 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-10-23 00:50:07 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:07.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5096" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":34,"skipped":589,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:07.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Oct 23 00:50:07.756: INFO: Found Service test-service-cm6vr in namespace services-2918 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Oct 23 00:50:07.756: INFO: Service test-service-cm6vr created STEP: Getting /status Oct 23 00:50:07.759: INFO: Service test-service-cm6vr has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Oct 23 00:50:07.764: INFO: observed Service test-service-cm6vr in namespace services-2918 with annotations: map[] & LoadBalancer: {[]} Oct 23 00:50:07.764: INFO: Found Service test-service-cm6vr in namespace services-2918 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Oct 23 00:50:07.764: INFO: Service test-service-cm6vr has service status patched STEP: updating the ServiceStatus Oct 23 00:50:07.769: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Oct 23 00:50:07.770: INFO: Observed Service test-service-cm6vr in namespace services-2918 with annotations: map[] & Conditions: {[]} Oct 23 00:50:07.770: INFO: Observed event: &Service{ObjectMeta:{test-service-cm6vr services-2918 5f8ed382-b10c-4f9f-8ed4-da6f14f49293 70413 0 2021-10-23 00:50:07 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-10-23 00:50:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.45.193,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.45.193],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Oct 23 00:50:07.770: INFO: Found Service test-service-cm6vr in namespace services-2918 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Oct 23 00:50:07.770: INFO: Service test-service-cm6vr has service status updated STEP: patching the service STEP: watching for the Service to be patched Oct 23 00:50:07.782: INFO: observed Service test-service-cm6vr in namespace services-2918 with labels: map[test-service-static:true] Oct 23 00:50:07.782: INFO: observed Service test-service-cm6vr in namespace services-2918 with labels: map[test-service-static:true] Oct 23 00:50:07.782: INFO: observed Service test-service-cm6vr in namespace services-2918 with labels: map[test-service-static:true] Oct 23 00:50:07.782: INFO: Found Service test-service-cm6vr in namespace services-2918 with labels: map[test-service:patched test-service-static:true] Oct 23 00:50:07.782: INFO: Service test-service-cm6vr patched STEP: deleting the service STEP: watching for the Service to be deleted Oct 23 00:50:07.792: INFO: Observed event: ADDED Oct 23 00:50:07.792: INFO: Observed event: MODIFIED Oct 23 00:50:07.792: INFO: Observed event: MODIFIED Oct 23 00:50:07.792: INFO: Observed event: MODIFIED Oct 23 00:50:07.792: INFO: Found Service test-service-cm6vr in namespace services-2918 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Oct 23 00:50:07.792: INFO: Service test-service-cm6vr deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:07.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2918" for this suite. 
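------------------------------
The [sig-network] Services status-lifecycle test above creates a Service, patches a placeholder ingress IP (203.0.113.1, from the TEST-NET-3 documentation range) into .status.loadBalancer, sets a custom status condition, patches a label, and verifies each change through a watch. A rough kubectl equivalent (the test writes status through the API's status subresource; the --subresource flag shown here only exists in kubectl v1.24+, newer than the v1.21 client used in this run):

kubectl create service clusterip test-service --tcp=80:80 -n demo-ns
kubectl get service test-service -n demo-ns -w &          # observe ADDED/MODIFIED/DELETED

# Patch metadata the way the test patches labels:
kubectl patch service test-service -n demo-ns --type merge \
  -p '{"metadata":{"labels":{"test-service":"patched"}}}'

# Status writes must target the status subresource:
kubectl patch service test-service -n demo-ns --subresource=status --type merge \
  -p '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}'

kubectl delete service test-service -n demo-ns
------------------------------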
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":35,"skipped":600,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:55.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:09.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1782" for this suite. • [SLOW TEST:14.040 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":16,"skipped":304,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:01.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 00:50:01.290: INFO: Waiting up to 5m0s for pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4" in namespace "downward-api-3838" to be "Succeeded or Failed" Oct 23 00:50:01.293: INFO: Pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682287ms Oct 23 00:50:03.296: INFO: Pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005758392s Oct 23 00:50:05.300: INFO: Pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010446692s Oct 23 00:50:07.305: INFO: Pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014533409s Oct 23 00:50:09.308: INFO: Pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.017788034s STEP: Saw pod success Oct 23 00:50:09.308: INFO: Pod "downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4" satisfied condition "Succeeded or Failed" Oct 23 00:50:09.310: INFO: Trying to get logs from node node2 pod downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4 container dapi-container: STEP: delete the pod Oct 23 00:50:09.335: INFO: Waiting for pod downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4 to disappear Oct 23 00:50:09.340: INFO: Pod downward-api-f83767dd-1f1b-437d-9768-21f8cce17fc4 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:09.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3838" for this suite. • [SLOW TEST:8.100 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":315,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:01.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:50:01.342: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c" in namespace "security-context-test-346" to be "Succeeded or Failed" Oct 23 00:50:01.345: INFO: Pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744576ms Oct 23 00:50:03.349: INFO: Pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006313704s Oct 23 00:50:05.352: INFO: Pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009507353s Oct 23 00:50:07.355: INFO: Pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013002048s Oct 23 00:50:09.358: INFO: Pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.015859204s Oct 23 00:50:09.358: INFO: Pod "busybox-user-65534-9ae39bf4-acfa-4352-a9ab-49ea9098b69c" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:09.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-346" for this suite. • [SLOW TEST:8.055 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":144,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:57.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:49:57.663: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:49:59.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:50:01.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:50:03.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770546997, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:50:06.685: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:50:06.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7707-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:14.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7865" for this suite. STEP: Destroying namespace "webhook-7865-markers" for this suite. 
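------------------------------
The AdmissionWebhook test above deploys a webhook server behind a Service named e2e-test-webhook, waits for the Deployment and its endpoints, then registers the hook via the AdmissionRegistration API (the pod-mutating test that follows repeats the same setup). A minimal registration sketch for a mutating hook on a custom resource (group, path, and names are illustrative; the caBundle, which must hold the CA that signed the server certificate, is elided):

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-mutating-webhook
webhooks:
- name: mutate-custom-resource.example.com
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-markers
      path: /mutating-custom-resource
    # caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["*"]
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
EOF
------------------------------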
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.425 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":19,"skipped":261,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:07.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 23 00:50:08.158: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:50:08.170: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:50:10.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:50:12.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547008, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:50:15.188: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:15.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7899" for this suite. STEP: Destroying namespace "webhook-7899-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.418 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":36,"skipped":617,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:09.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4c754961-41dd-428e-84e4-7330614c745a STEP: Creating the pod Oct 23 00:50:09.401: INFO: The status of Pod pod-projected-configmaps-e894beb3-217e-4c99-8f0c-d3e5aaaff5e7 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:11.405: INFO: The status of Pod pod-projected-configmaps-e894beb3-217e-4c99-8f0c-d3e5aaaff5e7 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:13.407: INFO: The status of Pod pod-projected-configmaps-e894beb3-217e-4c99-8f0c-d3e5aaaff5e7 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-4c754961-41dd-428e-84e4-7330614c745a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7067" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":319,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:15.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:15.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8" for this suite. 
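------------------------------
The [sig-auth] ServiceAccounts test above walks a plain create / patch / list-by-label-selector / delete lifecycle. The same sequence with kubectl (names and label are illustrative):

kubectl create serviceaccount e2e-sa -n demo-ns
kubectl label serviceaccount e2e-sa purpose=demo -n demo-ns   # the "patch" step
kubectl get serviceaccounts -n demo-ns -l purpose=demo        # find by LabelSelector
kubectl delete serviceaccount e2e-sa -n demo-ns
------------------------------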
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":15,"skipped":323,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:09.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2712 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2712;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2712 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2712;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2712.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2712.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2712.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2712.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2712.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2712.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2712.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2712.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2712.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.52.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.52.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.52.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.52.251_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2712 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2712;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2712 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2712;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2712.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2712.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2712.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2712.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2712.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2712.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2712.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2712.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2712.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2712.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.52.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.52.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.52.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.52.251_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:50:15.467: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.469: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-2712 from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.474: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2712 from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.476: INFO: Unable to read wheezy_udp@dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.478: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.481: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.482: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.500: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.503: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.505: INFO: Unable to read jessie_udp@dns-test-service.dns-2712 from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.507: INFO: Unable to read jessie_tcp@dns-test-service.dns-2712 from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.509: INFO: Unable to read jessie_udp@dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.511: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2712.svc from pod dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2: the server could not find the requested resource (get pods dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2) Oct 23 00:50:15.529: INFO: Lookups using dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2712 wheezy_tcp@dns-test-service.dns-2712 wheezy_udp@dns-test-service.dns-2712.svc wheezy_tcp@dns-test-service.dns-2712.svc wheezy_udp@_http._tcp.dns-test-service.dns-2712.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2712.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2712 jessie_tcp@dns-test-service.dns-2712 jessie_udp@dns-test-service.dns-2712.svc jessie_tcp@dns-test-service.dns-2712.svc jessie_udp@_http._tcp.dns-test-service.dns-2712.svc jessie_tcp@_http._tcp.dns-test-service.dns-2712.svc] Oct 23 00:50:20.597: INFO: DNS probes using dns-2712/dns-test-c74d0395-aec0-44c2-afb1-98f2aeb57de2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:20.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2712" for this suite. 
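The wheezy/jessie probe loops above lean on the pod's resolv.conf search path: dig's +search option is what lets the partially qualified names (dns-test-service, dns-test-service.dns-2712, dns-test-service.dns-2712.svc, ...) resolve without the cluster.local suffix. A single-shot version of two of the probes, run from inside a pod in the dns-2712 namespace (the suite retries each probe up to 600 times, writing OK marker files):

  $ dig +notcp +noall +answer +search dns-test-service A           # UDP probe of the short name
  $ dig +tcp +noall +answer +search dns-test-service.dns-2712.svc A  # TCP probe of the svc-qualified name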
• [SLOW TEST:11.248 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:15.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-2921277e-15da-4202-a039-9c8279ab993f STEP: Creating a pod to test consume secrets Oct 23 00:50:15.337: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354" in namespace "projected-7019" to be "Succeeded or Failed" Oct 23 00:50:15.339: INFO: Pod "pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066175ms Oct 23 00:50:17.343: INFO: Pod "pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006056616s Oct 23 00:50:19.346: INFO: Pod "pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009197418s Oct 23 00:50:21.350: INFO: Pod "pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013287222s STEP: Saw pod success Oct 23 00:50:21.350: INFO: Pod "pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354" satisfied condition "Succeeded or Failed" Oct 23 00:50:21.352: INFO: Trying to get logs from node node2 pod pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354 container projected-secret-volume-test: STEP: delete the pod Oct 23 00:50:21.364: INFO: Waiting for pod pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354 to disappear Oct 23 00:50:21.366: INFO: Pod pod-projected-secrets-ae9e4795-1959-4015-a190-97e2dcaa2354 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:21.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7019" for this suite. 
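For reference, the pod this spec builds pairs a projected secret volume with a pod-level securityContext: defaultMode controls the mode bits on the projected files, while runAsUser/fsGroup make the non-root container the group owner. A minimal hand-rolled sketch under assumed names (the suite generates its own names and uses its agnhost test image; busybox stands in here):

  $ kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1   # assumed name
  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo      # assumed name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                # non-root, as in the spec title
      fsGroup: 2000
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/projected"]   # shows mode bits and group ownership
      volumeMounts:
      - name: proj
        mountPath: /etc/projected
    volumes:
    - name: proj
      projected:
        defaultMode: 0440            # mode applied to the projected files
        sources:
        - secret:
            name: projected-demo-secret
  EOF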
• [SLOW TEST:6.073 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":634,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:39.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 23 00:49:43.268: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1171 PodName:var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:43.268: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Oct 23 00:49:43.431: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1171 PodName:var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:49:43.431: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Oct 23 00:49:44.063: INFO: Successfully updated pod "var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 23 00:49:44.065: INFO: Deleting pod "var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9" in namespace "var-expansion-1171" Oct 23 00:49:44.070: INFO: Wait up to 5m0s for pod "var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:24.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1171" for this suite. 
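The two ExecWithOptions calls above are the framework's in-process equivalent of kubectl exec. Replayed against the same pod (namespace, pod, container, and paths taken verbatim from the log), they look like:

  $ kubectl exec -n var-expansion-1171 var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9 \
      -c dapi-container -- /bin/sh -c 'touch /volume_mount/mypath/foo/test.log'
  $ kubectl exec -n var-expansion-1171 var-expansion-2338606e-54e6-4a7c-aeb5-b1302ae12ca9 \
      -c dapi-container -- test -f /subpath_mount/test.log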
• [SLOW TEST:44.862 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":22,"skipped":371,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:09.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Oct 23 00:50:09.462: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:50:09.474: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:50:11.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:50:13.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547009, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Oct 23 00:50:16.493: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 23 00:50:24.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-2458 attach --namespace=webhook-2458 to-be-attached-pod -i -c=container1' Oct 23 00:50:24.707: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:24.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2458" for this suite. STEP: Destroying namespace "webhook-2458-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":17,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:49:56.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-pbls STEP: Creating a pod to test atomic-volume-subpath Oct 23 00:49:56.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pbls" in namespace "subpath-8748" to be "Succeeded or Failed" Oct 23 00:49:56.909: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.367282ms Oct 23 00:49:58.913: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006090296s Oct 23 00:50:00.917: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009553936s Oct 23 00:50:02.921: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01375022s Oct 23 00:50:04.925: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.017868399s Oct 23 00:50:06.930: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 10.023088673s Oct 23 00:50:08.933: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 12.026174945s Oct 23 00:50:10.936: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 14.029158427s Oct 23 00:50:12.940: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 16.033155344s Oct 23 00:50:14.944: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 18.036525602s Oct 23 00:50:16.948: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 20.041340324s Oct 23 00:50:18.951: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 22.044386685s Oct 23 00:50:20.956: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 24.048409558s Oct 23 00:50:22.960: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Running", Reason="", readiness=true. Elapsed: 26.05241693s Oct 23 00:50:24.963: INFO: Pod "pod-subpath-test-configmap-pbls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.056021176s STEP: Saw pod success Oct 23 00:50:24.963: INFO: Pod "pod-subpath-test-configmap-pbls" satisfied condition "Succeeded or Failed" Oct 23 00:50:24.965: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-pbls container test-container-subpath-configmap-pbls: STEP: delete the pod Oct 23 00:50:24.978: INFO: Waiting for pod pod-subpath-test-configmap-pbls to disappear Oct 23 00:50:24.980: INFO: Pod pod-subpath-test-configmap-pbls no longer exists STEP: Deleting pod pod-subpath-test-configmap-pbls Oct 23 00:50:24.980: INFO: Deleting pod "pod-subpath-test-configmap-pbls" in namespace "subpath-8748" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:24.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8748" for this suite. 
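The Pending/Running/Succeeded lines above are straight polls of the pod's status.phase; the same wait, expressed as a shell loop against the pod named in the log:

  while :; do
    phase=$(kubectl get pod pod-subpath-test-configmap-pbls -n subpath-8748 \
      -o jsonpath='{.status.phase}')
    echo "phase=$phase"
    case "$phase" in Succeeded|Failed) break ;; esac   # terminal phases end the wait
    sleep 2
  done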
• [SLOW TEST:28.121 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":359,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:21.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Oct 23 00:50:21.424: INFO: Waiting up to 5m0s for pod "downward-api-38c31fda-6187-422c-9061-b892d326cc3f" in namespace "downward-api-8140" to be "Succeeded or Failed" Oct 23 00:50:21.426: INFO: Pod "downward-api-38c31fda-6187-422c-9061-b892d326cc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128099ms Oct 23 00:50:23.431: INFO: Pod "downward-api-38c31fda-6187-422c-9061-b892d326cc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006682999s Oct 23 00:50:25.435: INFO: Pod "downward-api-38c31fda-6187-422c-9061-b892d326cc3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010539287s Oct 23 00:50:27.441: INFO: Pod "downward-api-38c31fda-6187-422c-9061-b892d326cc3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016464391s STEP: Saw pod success Oct 23 00:50:27.441: INFO: Pod "downward-api-38c31fda-6187-422c-9061-b892d326cc3f" satisfied condition "Succeeded or Failed" Oct 23 00:50:27.443: INFO: Trying to get logs from node node2 pod downward-api-38c31fda-6187-422c-9061-b892d326cc3f container dapi-container: STEP: delete the pod Oct 23 00:50:27.460: INFO: Waiting for pod downward-api-38c31fda-6187-422c-9061-b892d326cc3f to disappear Oct 23 00:50:27.462: INFO: Pod downward-api-38c31fda-6187-422c-9061-b892d326cc3f no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:27.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8140" for this suite. 
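The env var this spec checks comes from the downward API: a fieldRef on status.hostIP injects the node's IP into the container environment. A minimal sketch with assumed names:

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo          # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
      env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP   # the node IP the pod landed on
  EOF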
• [SLOW TEST:6.079 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:24.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:50:25.311: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:50:27.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547025, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547025, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547025, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547025, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:50:30.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:30.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-771" for this suite. STEP: Destroying namespace "webhook-771-markers" for this suite. 
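Listing and collection-deleting validating webhooks, as this spec does through the API, has a direct kubectl form; the label selector below is illustrative (the suite tags its configurations with its own labels):

  $ kubectl get validatingwebhookconfigurations -l e2e-list-test=true      # illustrative label
  $ kubectl delete validatingwebhookconfigurations -l e2e-list-test=true   # delete the collection by label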
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.686 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":18,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:15.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2897.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2897.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 222.51.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.51.222_udp@PTR;check="$$(dig +tcp +noall +answer +search 222.51.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.51.222_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2897.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2897.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2897.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2897.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2897.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 222.51.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.51.222_udp@PTR;check="$$(dig +tcp +noall +answer +search 222.51.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.51.222_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:50:25.592: INFO: Unable to read wheezy_udp@dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.594: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.598: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.601: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.637: INFO: Unable to read jessie_udp@dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.640: INFO: Unable to read jessie_tcp@dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.642: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.644: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local from pod dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97: the server could not find the requested resource (get pods dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97) Oct 23 00:50:25.658: INFO: Lookups using dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97 failed for: [wheezy_udp@dns-test-service.dns-2897.svc.cluster.local wheezy_tcp@dns-test-service.dns-2897.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local jessie_udp@dns-test-service.dns-2897.svc.cluster.local jessie_tcp@dns-test-service.dns-2897.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2897.svc.cluster.local] Oct 23 00:50:30.702: INFO: DNS probes using dns-2897/dns-test-437abd0e-de01-497b-af2f-72f5a6fa2d97 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:30.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2897" for this suite. 
• [SLOW TEST:15.190 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":16,"skipped":342,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:30.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:30.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6911" for this suite. 
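What the 406 check exercises: clients can ask the apiserver to render any list as a meta.k8s.io Table via content negotiation, and a backend that cannot produce one must answer 406 Not Acceptable. A rough sketch of that negotiation through kubectl proxy (port and resource path are illustrative; against core resources such as pods this returns 200 with a Table, whereas the spec points at a backend without Table support):

  $ kubectl proxy --port=8001 &    # illustrative port
  $ curl -si -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/namespaces/default/pods | head -n 1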
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":17,"skipped":343,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:14.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Oct 23 00:50:14.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1084 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Oct 23 00:50:15.015: INFO: stderr: "" Oct 23 00:50:15.015: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Oct 23 00:50:15.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1084 delete pods e2e-test-httpd-pod' Oct 23 00:50:33.848: INFO: stderr: "" Oct 23 00:50:33.848: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:33.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1084" for this suite. 
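Minus the harness flags, the commands this spec ran reduce to the following. With --restart=Never, kubectl run creates a bare Pod rather than a workload-managed one, which is why the delete above had to wait for the pod itself to terminate:

  $ kubectl run e2e-test-httpd-pod --restart=Never \
      --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
  $ kubectl get pod e2e-test-httpd-pod     # a bare Pod, no owning controller
  $ kubectl delete pod e2e-test-httpd-pod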
• [SLOW TEST:19.027 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":20,"skipped":263,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:20.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4991.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4991.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4991.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4991.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4991.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4991.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:50:28.784: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.787: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.789: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.791: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.800: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.803: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.807: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4991.svc.cluster.local from pod 
dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.810: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4991.svc.cluster.local from pod dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3: the server could not find the requested resource (get pods dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3) Oct 23 00:50:28.815: INFO: Lookups using dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4991.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4991.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4991.svc.cluster.local jessie_udp@dns-test-service-2.dns-4991.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4991.svc.cluster.local] Oct 23 00:50:33.852: INFO: DNS probes using dns-4991/dns-test-9dc905a3-33b8-4d74-882e-77a6b66d20b3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4991" for this suite. • [SLOW TEST:13.208 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":17,"skipped":168,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:33.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-4eaf36cd-39b8-445a-80d0-318cd910e3dc STEP: Creating a pod to test consume secrets Oct 23 00:50:33.961: INFO: Waiting up to 5m0s for pod "pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725" in namespace "secrets-6068" to be "Succeeded or Failed" Oct 23 00:50:33.963: INFO: Pod "pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726277ms Oct 23 00:50:35.967: INFO: Pod "pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006154095s Oct 23 00:50:37.971: INFO: Pod "pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009962103s STEP: Saw pod success Oct 23 00:50:37.971: INFO: Pod "pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725" satisfied condition "Succeeded or Failed" Oct 23 00:50:37.973: INFO: Trying to get logs from node node1 pod pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725 container secret-volume-test: STEP: delete the pod Oct 23 00:50:38.071: INFO: Waiting for pod pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725 to disappear Oct 23 00:50:38.073: INFO: Pod pod-secrets-3defe717-bc21-411a-98e1-f1a51c10c725 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:38.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6068" for this suite. STEP: Destroying namespace "secret-namespace-7403" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":174,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:33.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Oct 23 00:50:33.932: INFO: Waiting up to 5m0s for pod "var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27" in namespace "var-expansion-8947" to be "Succeeded or Failed" Oct 23 00:50:33.937: INFO: Pod "var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27474ms Oct 23 00:50:35.940: INFO: Pod "var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007445401s Oct 23 00:50:37.944: INFO: Pod "var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011327773s Oct 23 00:50:39.947: INFO: Pod "var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014943533s STEP: Saw pod success Oct 23 00:50:39.947: INFO: Pod "var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27" satisfied condition "Succeeded or Failed" Oct 23 00:50:39.949: INFO: Trying to get logs from node node2 pod var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27 container dapi-container: STEP: delete the pod Oct 23 00:50:39.961: INFO: Waiting for pod var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27 to disappear Oct 23 00:50:39.963: INFO: Pod var-expansion-a11d0b51-c8e9-4c15-b230-3683d3318b27 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:39.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8947" for this suite. 
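The substitution under test is kubelet-side, not shell-side: $(VAR) references in a container's command/args are expanded from the container's env before the process starts. A minimal sketch with assumed names (the single-quoted heredoc keeps $(MESSAGE) literal so the kubelet, not the local shell, expands it):

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo         # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c"]
      args: ["echo $(MESSAGE)"]      # $(MESSAGE) is expanded by the kubelet, not by sh
      env:
      - name: MESSAGE
        value: "expanded by the kubelet"
  EOF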
• [SLOW TEST:6.073 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":280,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:38.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Oct 23 00:50:38.128: INFO: Waiting up to 5m0s for pod "client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe" in namespace "containers-6680" to be "Succeeded or Failed" Oct 23 00:50:38.131: INFO: Pod "client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.583854ms Oct 23 00:50:40.134: INFO: Pod "client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005769942s Oct 23 00:50:42.138: INFO: Pod "client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01045474s STEP: Saw pod success Oct 23 00:50:42.139: INFO: Pod "client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe" satisfied condition "Succeeded or Failed" Oct 23 00:50:42.141: INFO: Trying to get logs from node node2 pod client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe container agnhost-container: STEP: delete the pod Oct 23 00:50:42.154: INFO: Waiting for pod client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe to disappear Oct 23 00:50:42.156: INFO: Pod client-containers-6c23eeeb-fe4f-4ed7-8c8a-0d9458b110fe no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:42.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6680" for this suite. 
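The override in question is spec.containers[].command, which replaces the image's ENTRYPOINT (args would replace CMD). A minimal sketch with assumed names; the suite uses its agnhost test image, with busybox substituted here for brevity:

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-command-demo      # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: agnhost-container
      image: busybox
      command: ["echo", "ENTRYPOINT overridden"]   # command replaces the image ENTRYPOINT
  EOF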
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":175,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:30.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Oct 23 00:50:30.860: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Oct 23 00:50:30.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 create -f -' Oct 23 00:50:31.242: INFO: stderr: "" Oct 23 00:50:31.242: INFO: stdout: "service/agnhost-replica created\n" Oct 23 00:50:31.243: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Oct 23 00:50:31.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 create -f -' Oct 23 00:50:31.560: INFO: stderr: "" Oct 23 00:50:31.560: INFO: stdout: "service/agnhost-primary created\n" Oct 23 00:50:31.561: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Oct 23 00:50:31.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 create -f -' Oct 23 00:50:31.875: INFO: stderr: "" Oct 23 00:50:31.875: INFO: stdout: "service/frontend created\n" Oct 23 00:50:31.875: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Oct 23 00:50:31.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 create -f -' Oct 23 00:50:32.208: INFO: stderr: "" Oct 23 00:50:32.208: INFO: stdout: "deployment.apps/frontend created\n" Oct 23 00:50:32.208: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 23 00:50:32.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 create -f -' Oct 23 00:50:32.512: INFO: stderr: "" Oct 23 00:50:32.512: INFO: stdout: "deployment.apps/agnhost-primary created\n" Oct 23 00:50:32.513: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 23 00:50:32.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 create -f -' Oct 23 00:50:32.833: INFO: stderr: "" Oct 23 00:50:32.833: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Oct 23 00:50:32.833: INFO: Waiting for all frontend pods to be Running. Oct 23 00:50:42.885: INFO: Waiting for frontend to serve content. Oct 23 00:50:42.895: INFO: Trying to add a new entry to the guestbook. Oct 23 00:50:42.903: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Oct 23 00:50:42.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 delete --grace-period=0 --force -f -' Oct 23 00:50:43.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:50:43.044: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Oct 23 00:50:43.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 delete --grace-period=0 --force -f -' Oct 23 00:50:43.183: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:50:43.183: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 23 00:50:43.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 delete --grace-period=0 --force -f -' Oct 23 00:50:43.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:50:43.327: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 23 00:50:43.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 delete --grace-period=0 --force -f -' Oct 23 00:50:43.467: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:50:43.467: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 23 00:50:43.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 delete --grace-period=0 --force -f -' Oct 23 00:50:43.601: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:50:43.601: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 23 00:50:43.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8150 delete --grace-period=0 --force -f -' Oct 23 00:50:43.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:50:43.718: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:43.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8150" for this suite. 
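------------------------------
The frontend Service manifest above ships with type: LoadBalancer commented out. On a cluster with a cloud load-balancer integration, the externally exposed variant would look like this (same selector and port, only the type changes):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # the option the manifest above leaves commented out
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
------------------------------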
• [SLOW TEST:12.887 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":18,"skipped":381,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:40.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 23 00:50:40.036: INFO: Waiting up to 5m0s for pod "pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8" in namespace "emptydir-1699" to be "Succeeded or Failed" Oct 23 00:50:40.038: INFO: Pod "pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371426ms Oct 23 00:50:42.042: INFO: Pod "pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006097161s Oct 23 00:50:44.045: INFO: Pod "pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008859532s STEP: Saw pod success Oct 23 00:50:44.045: INFO: Pod "pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8" satisfied condition "Succeeded or Failed" Oct 23 00:50:44.048: INFO: Trying to get logs from node node1 pod pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8 container test-container: STEP: delete the pod Oct 23 00:50:44.132: INFO: Waiting for pod pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8 to disappear Oct 23 00:50:44.134: INFO: Pod pod-1ba3862d-6340-4ba6-8f8f-a4869b2488f8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:44.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1699" for this suite. 
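------------------------------
The EmptyDir spec above writes a 0666 file into an emptyDir volume on the node's default medium and asserts on its mode as a non-root user. A sketch of a pod inspecting such a mount; the suite uses its own mounttest helper, so the image, user id, and command here are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # the "non-root" part of the spec name
  containers:
  - name: test-container          # container name as logged above
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["/bin/sh", "-c", "stat -c '%a' /test-volume"]   # prints the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # empty medium string = the node's default storage
------------------------------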
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":293,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:43.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:50:43.850: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 23 00:50:45.877: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:46.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5747" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":19,"skipped":431,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:46.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Oct 23 00:50:46.984: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4790 proxy --unix-socket=/tmp/kubectl-proxy-unix481232132/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:47.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4790" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":20,"skipped":463,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:44.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 23 00:50:44.190: INFO: Waiting up to 5m0s for pod "pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3" in namespace "emptydir-7201" to be "Succeeded or Failed" Oct 23 00:50:44.192: INFO: Pod "pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.900579ms Oct 23 00:50:46.195: INFO: Pod "pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004771286s Oct 23 00:50:48.199: INFO: Pod "pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008560918s STEP: Saw pod success Oct 23 00:50:48.199: INFO: Pod "pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3" satisfied condition "Succeeded or Failed" Oct 23 00:50:48.201: INFO: Trying to get logs from node node1 pod pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3 container test-container: STEP: delete the pod Oct 23 00:50:48.231: INFO: Waiting for pod pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3 to disappear Oct 23 00:50:48.233: INFO: Pod pod-6b5391ad-5f8b-42ef-a754-8a999c0ac5f3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:48.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7201" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":298,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:42.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9108.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9108.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9108.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9108.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9108.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:50:48.252: INFO: DNS probes using dns-9108/dns-test-af2499e9-7f23-441d-bb7a-2526f781f8ba succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:48.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9108" for this suite. • [SLOW TEST:6.098 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:47.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 23 00:50:53.717: INFO: Successfully updated pod "adopt-release-2klcw" STEP: Checking that the Job readopts the Pod Oct 23 00:50:53.717: INFO: Waiting up to 15m0s for pod "adopt-release-2klcw" in namespace "job-3944" to be "adopted" Oct 23 00:50:53.722: INFO: Pod "adopt-release-2klcw": Phase="Running", Reason="", readiness=true. Elapsed: 5.253641ms Oct 23 00:50:55.727: INFO: Pod "adopt-release-2klcw": Phase="Running", Reason="", readiness=true. Elapsed: 2.009541927s Oct 23 00:50:55.727: INFO: Pod "adopt-release-2klcw" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 23 00:50:56.236: INFO: Successfully updated pod "adopt-release-2klcw" STEP: Checking that the Job releases the Pod Oct 23 00:50:56.236: INFO: Waiting up to 15m0s for pod "adopt-release-2klcw" in namespace "job-3944" to be "released" Oct 23 00:50:56.238: INFO: Pod "adopt-release-2klcw": Phase="Running", Reason="", readiness=true. Elapsed: 2.188704ms Oct 23 00:50:58.242: INFO: Pod "adopt-release-2klcw": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005480895s Oct 23 00:50:58.242: INFO: Pod "adopt-release-2klcw" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:58.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3944" for this suite. • [SLOW TEST:11.081 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":21,"skipped":515,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:48.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Oct 23 00:50:48.382: INFO: The status of Pod annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:50.385: INFO: The status of Pod annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:52.387: INFO: The status of Pod annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:54.386: INFO: The status of Pod annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f is Running (Ready = true) Oct 23 00:50:54.907: INFO: Successfully updated pod "annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:58.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-824" for this suite. 
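------------------------------
The Downward API spec above mounts pod annotations as a file and then mutates them; the kubelet rewrites the projected file in place, which is how the test observes the update without restarting the container. A minimal sketch; pod name, image, annotation, and command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # illustrative; the suite uses a UID-suffixed name
  annotations:
    build: "one"                  # illustrative annotation that would later be patched
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # assumed image
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations   # only volumes (not env vars) may reference annotations
------------------------------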
• [SLOW TEST:10.604 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:59.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-3f23a986-79bb-474f-b7c2-27e7f7d4edc5 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:50:59.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1124" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":22,"skipped":251,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:30.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-e9bcedec-4610-41c2-91aa-e15b5e2e736e in namespace container-probe-1879 Oct 23 00:50:34.560: INFO: Started pod liveness-e9bcedec-4610-41c2-91aa-e15b5e2e736e in namespace container-probe-1879 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 00:50:34.563: INFO: Initial restart count of pod liveness-e9bcedec-4610-41c2-91aa-e15b5e2e736e is 0 Oct 23 00:51:00.621: INFO: Restart count of pod container-probe-1879/liveness-e9bcedec-4610-41c2-91aa-e15b5e2e736e is now 1 (26.057726228s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:00.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1879" for this suite. 
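------------------------------
The Probing container spec above waits ~26s for restartCount to go from 0 to 1, which is the kubelet reacting to a failing HTTP probe. A sketch of a pod with a /healthz liveness probe; the agnhost liveness server (assumed here, including its 8080 port) deliberately starts failing after a short healthy window:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo             # illustrative; the suite uses a UID-suffixed name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["liveness"]            # assumed subcommand of the agnhost test image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
      # once /healthz starts returning 500, the kubelet kills and restarts
      # the container, bumping restartCount as the test expects
------------------------------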
• [SLOW TEST:30.115 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":350,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:58.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:50:58.319: INFO: Creating simple deployment test-new-deployment Oct 23 00:50:58.327: INFO: deployment "test-new-deployment" doesn't have the required revision set Oct 23 00:51:00.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547058, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547058, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547058, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Oct 23 00:51:02.353: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-8885 1166c699-8096-445f-9db7-59e21f18ba29 72211 3 2021-10-23 00:50:58 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-10-23 00:50:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-10-23 00:51:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0046f7228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-10-23 00:51:01 +0000 UTC,LastTransitionTime:2021-10-23 00:51:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-10-23 00:51:01 +0000 UTC,LastTransitionTime:2021-10-23 00:50:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 23 00:51:02.355: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-8885 805753be-6c7c-4a47-9fbd-c91cfee12ab3 72212 2 2021-10-23 00:50:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
1166c699-8096-445f-9db7-59e21f18ba29 0xc0046f7617 0xc0046f7618}] [] [{kube-controller-manager Update apps/v1 2021-10-23 00:51:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1166c699-8096-445f-9db7-59e21f18ba29\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0046f7688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 23 00:51:02.358: INFO: Pod "test-new-deployment-847dcfb7fb-bxkr7" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-bxkr7 test-new-deployment-847dcfb7fb- deployment-8885 883032f1-4b47-4356-8257-cc0029c77667 72187 0 2021-10-23 00:50:58 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.139" ], "mac": "16:8a:e7:0c:23:47", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.139" ], "mac": "16:8a:e7:0c:23:47", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 805753be-6c7c-4a47-9fbd-c91cfee12ab3 0xc0046f7a1f 0xc0046f7a30}] [] [{kube-controller-manager Update v1 2021-10-23 00:50:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"805753be-6c7c-4a47-9fbd-c91cfee12ab3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-10-23 00:51:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-10-23 00:51:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mgv62,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgv62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:50:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:51:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:51:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-23 00:50:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.139,StartTime:2021-10-23 00:50:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-10-23 00:51:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://b232d6916d5c74674eb3f95d8c826ae6b0277f92754e2cc25c931f60072f48cb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 23 00:51:02.358: INFO: Pod "test-new-deployment-847dcfb7fb-f7kll" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-f7kll test-new-deployment-847dcfb7fb- deployment-8885 b557b6a6-5504-4edd-bd41-7827aeb8eb57 72214 0 2021-10-23 00:51:02 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 805753be-6c7c-4a47-9fbd-c91cfee12ab3 0xc0046f7c1f 0xc0046f7c30}] [] [{kube-controller-manager Update v1 2021-10-23 00:51:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"805753be-6c7c-4a47-9fbd-c91cfee12ab3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8dcn4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dcn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:02.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8885" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":22,"skipped":541,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:40.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3134 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3134 I1023 00:48:40.338253 35 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3134, replica count: 2 I1023 00:48:43.389448 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:46.390626 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:49.391666 35 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:48:49.391: INFO: Creating new exec pod Oct 23 00:48:56.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Oct 23 00:48:56.994: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Oct 23 
00:48:56.994: INFO: stdout: "externalname-service-szgbk" Oct 23 00:48:56.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.55.26 80' Oct 23 00:48:57.247: INFO: stderr: "+ nc -v -t -w 2 10.233.55.26 80\nConnection to 10.233.55.26 80 port [tcp/http] succeeded!\n+ echo hostName\n" Oct 23 00:48:57.247: INFO: stdout: "externalname-service-szgbk" Oct 23 00:48:57.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:48:57.494: INFO: rc: 1 Oct 23 00:48:57.494: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:48:58.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:48:58.756: INFO: rc: 1 Oct 23 00:48:58.756: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:48:59.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:48:59.898: INFO: rc: 1 Oct 23 00:48:59.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:49:00.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:49:00.851: INFO: rc: 1 Oct 23 00:49:00.851: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
[The identical probe is rerun roughly once per second; every attempt from Oct 23 00:48:58 through Oct 23 00:50:42 fails the same way, logging "nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused", rc: 1, and "Retrying...".]
Oct 23 00:50:43.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:43.826: INFO: rc: 1 Oct 23 00:50:43.827: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:44.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:44.951: INFO: rc: 1 Oct 23 00:50:44.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:45.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:45.905: INFO: rc: 1 Oct 23 00:50:45.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:46.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:46.944: INFO: rc: 1 Oct 23 00:50:46.944: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:47.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:47.938: INFO: rc: 1 Oct 23 00:50:47.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:48.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:50.093: INFO: rc: 1 Oct 23 00:50:50.093: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:50.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:50.799: INFO: rc: 1 Oct 23 00:50:50.800: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:51.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:51.991: INFO: rc: 1 Oct 23 00:50:51.991: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:52.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:52.792: INFO: rc: 1 Oct 23 00:50:52.792: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31722 nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:53.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:53.747: INFO: rc: 1 Oct 23 00:50:53.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:54.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:54.741: INFO: rc: 1 Oct 23 00:50:54.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:55.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:55.748: INFO: rc: 1 Oct 23 00:50:55.749: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:56.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:56.741: INFO: rc: 1 Oct 23 00:50:56.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:57.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:57.729: INFO: rc: 1 Oct 23 00:50:57.729: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:57.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722' Oct 23 00:50:57.964: INFO: rc: 1 Oct 23 00:50:57.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3134 exec execpod2728q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31722: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31722 + echo hostName nc: connect to 10.10.190.207 port 31722 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
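------------------------------
The loop above is the suite polling the NodePort (node IP 10.10.190.207, port 31722) from inside the helper pod execpod2728q until a 2m0s deadline expires; every probe is the same one-liner, echo hostName | nc -v -t -w 2 <nodeIP> <nodePort>, run via kubectl exec. A minimal standalone sketch of that poll loop in Go, reusing the pod, namespace, and endpoint from this log (an approximation of the pattern, not the e2e framework's own implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Endpoint under test, copied from the log: node IP + NodePort.
	const endpoint = "10.10.190.207 31722"
	deadline := time.Now().Add(2 * time.Minute) // the 2m0s timeout seen in the FAIL below
	for time.Now().Before(deadline) {
		// Same probe the suite runs: exec a short netcat inside the helper pod.
		cmd := exec.Command("kubectl",
			"--kubeconfig=/root/.kube/config",
			"--namespace=services-3134",
			"exec", "execpod2728q", "--",
			"/bin/sh", "-x", "-c",
			"echo hostName | nc -v -t -w 2 "+endpoint)
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("reachable:\n%s", out) // first rc=0 ends the poll
			return
		}
		time.Sleep(1 * time.Second) // the log shows roughly one attempt per second
	}
	fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31722 over TCP protocol")
}

A "Connection refused" on every attempt, as here, usually points at nothing answering on the NodePort at all (no proxy rule, or the port rejecting for lack of endpoints); a slow or wedged backend would tend to produce timeouts instead.
------------------------------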
Oct 23 00:50:57.965: FAIL: Unexpected error:
    <*errors.errorString | 0xc0025bb330>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31722 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31722 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001d01800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001d01800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001d01800, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 00:50:57.966: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3134".
STEP: Found 17 events.
Oct 23 00:50:58.000: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod2728q: { } Scheduled: Successfully assigned services-3134/execpod2728q to node2
Oct 23 00:50:58.000: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-9rwls: { } Scheduled: Successfully assigned services-3134/externalname-service-9rwls to node1
Oct 23 00:50:58.000: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-szgbk: { } Scheduled: Successfully assigned services-3134/externalname-service-szgbk to node2
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:40 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-9rwls
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:40 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-szgbk
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:43 +0000 UTC - event for externalname-service-szgbk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:43 +0000 UTC - event for externalname-service-szgbk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 461.567151ms
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:43 +0000 UTC - event for externalname-service-szgbk: {kubelet node2} Created: Created container externalname-service
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:44 +0000 UTC - event for externalname-service-szgbk: {kubelet node2} Started: Started container externalname-service
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:46 +0000 UTC - event for externalname-service-9rwls: {kubelet node1} Created: Created container externalname-service
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:46 +0000 UTC - event for externalname-service-9rwls: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:46 +0000 UTC - event for externalname-service-9rwls: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 397.571876ms
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:47 +0000 UTC - event for externalname-service-9rwls: {kubelet node1} Started: Started container externalname-service
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:52 +0000 UTC - event for execpod2728q: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:50:58.000: INFO: At 2021-10-23 00:48:53 +0000 UTC - event for execpod2728q: {kubelet node2} Started: Started container agnhost-container
Oct 23 00:50:58.001: INFO: At 2021-10-23 00:48:53 +0000 UTC - event for execpod2728q: {kubelet node2} Created: Created container agnhost-container
Oct 23 00:50:58.001: INFO: At 2021-10-23 00:48:53 +0000 UTC - event for execpod2728q: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 322.58908ms
Oct 23 00:50:58.003: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 00:50:58.003: INFO: execpod2728q node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:49 +0000 UTC }]
Oct 23 00:50:58.003: INFO: externalname-service-9rwls node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:40 +0000 UTC }]
Oct 23 00:50:58.003: INFO: externalname-service-szgbk node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:48:40 +0000 UTC }]
Oct 23 00:50:58.003: INFO:
Oct 23 00:50:58.007: INFO: Logging node info for node master1
Oct 23 00:50:58.009: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 72066 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 00:50:58.010: INFO: Logging kubelet events for node master1
Oct 23 00:50:58.012: INFO: Logging pods the kubelet thinks is on node master1
Oct 23 00:50:58.021: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container docker-registry ready: true, restart count 0
Oct 23 00:50:58.021: INFO: Container nginx ready: true, restart count 0
Oct 23 00:50:58.021: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:50:58.021: INFO: Container node-exporter ready: true, restart count 0
Oct 23 00:50:58.021: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 00:50:58.021: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 00:50:58.021: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 00:50:58.021: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Init container install-cni ready: true, restart count 1
Oct 23 00:50:58.021: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 00:50:58.021: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:50:58.021: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container coredns ready: true, restart count 2
Oct 23 00:50:58.021: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.021: INFO: Container kube-scheduler ready: true, restart count 0
W1023 00:50:58.032899 36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
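------------------------------
Everything from "Logging node info" onward is the framework's standard failure diagnostics: for each node it dumps the full Node object (labels, annotations, capacity, the NodeCondition list, cached images), then kubelet events and per-pod container statuses. The same condition data can be pulled outside the suite with client-go; a minimal sketch, with the kubeconfig path and node name taken from this log and error handling reduced to panics:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch one node and print the conditions the dump above shows inline:
	// NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}

In this run every master reports Ready=True with the pressure conditions False, so node health does not explain the NodePort failure.
------------------------------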
Oct 23 00:50:58.100: INFO: Latency metrics for node master1 Oct 23 00:50:58.100: INFO: Logging node info for node master2 Oct 23 00:50:58.103: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 71950 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:50 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:50 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:50 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:50:50 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 00:50:58.103: INFO: Logging kubelet events for node master2
Oct 23 00:50:58.106: INFO: Logging pods the kubelet thinks is on node master2
Oct 23 00:50:58.114: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.114: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 00:50:58.114: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.114: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 00:50:58.114: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.114: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 00:50:58.114: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:50:58.114: INFO: Init container install-cni ready: true, restart count 2
Oct 23 00:50:58.114: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 00:50:58.114: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.115: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:50:58.115: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.115: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 00:50:58.115: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.115: INFO: Container autoscaler ready: true, restart count 1
Oct 23 00:50:58.115: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:50:58.115: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:50:58.115: INFO: Container node-exporter ready: true, restart count 0
W1023 00:50:58.127477 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
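------------------------------
The "Logging pods the kubelet thinks is on node ..." blocks list every pod bound to a node together with per-container readiness and restart counts. Listing pods by node is a field-selector query on spec.nodeName; a minimal client-go sketch of that step (again not the framework's own code; the node name and kubeconfig path come from this log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// All namespaces, filtered server-side to pods scheduled on master2.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=master2"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s started at %v\n", p.Name, p.Status.StartTime)
		for _, s := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %t, restart count %d\n",
				s.Name, s.Ready, s.RestartCount)
		}
	}
}

Note that none of the pods in these listings belong to the failing services-3134 namespace; they are general node context captured by the diagnostics.
------------------------------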
Oct 23 00:50:58.193: INFO: Latency metrics for node master2 Oct 23 00:50:58.193: INFO: Logging node info for node master3 Oct 23 00:50:58.195: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 72059 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:50:54 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 00:50:58.196: INFO: Logging kubelet events for node master3
Oct 23 00:50:58.198: INFO: Logging pods the kubelet thinks is on node master3
Oct 23 00:50:58.206: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 00:50:58.206: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:50:58.206: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container coredns ready: true, restart count 2
Oct 23 00:50:58.206: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 00:50:58.206: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 00:50:58.206: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 00:50:58.206: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Init container install-cni ready: true, restart count 1
Oct 23 00:50:58.206: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 00:50:58.206: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:50:58.206: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 00:50:58.206: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container
statuses recorded) Oct 23 00:50:58.206: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:50:58.206: INFO: Container node-exporter ready: true, restart count 0 W1023 00:50:58.221365 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:50:58.284: INFO: Latency metrics for node master3 Oct 23 00:50:58.284: INFO: Logging node info for node node1 Oct 23 00:50:58.287: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 71934 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:17:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:49 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:49 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:49 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:50:49 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:50:58.288: INFO: Logging kubelet events for node node1 Oct 23 00:50:58.289: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 00:50:58.305: INFO: 
kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 00:50:58.306: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 00:50:58.306: INFO: Container config-reloader ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container grafana ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container prometheus ready: true, restart count 1 Oct 23 00:50:58.306: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:50:58.306: INFO: Container collectd ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:50:58.306: INFO: externalname-service-9rwls started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:50:58.306: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 00:50:58.306: INFO: netserver-0 started at 2021-10-23 00:50:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container webserver ready: true, restart count 0 Oct 23 00:50:58.306: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:50:58.306: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:50:58.306: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:50:58.306: INFO: rc-test-k8qzp started at 2021-10-23 00:50:48 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container rc-test ready: false, restart count 0 Oct 23 00:50:58.306: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 00:50:58.306: INFO: Container discover ready: false, restart count 0 Oct 23 00:50:58.306: INFO: Container init ready: false, restart count 0 Oct 23 00:50:58.306: INFO: Container install ready: false, restart count 0 Oct 23 00:50:58.306: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:50:58.306: INFO: Container nodereport ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container reconcile ready: true, restart count 0 Oct 23 00:50:58.306: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:50:58.306: INFO: liveness-e9bcedec-4610-41c2-91aa-e15b5e2e736e started at 2021-10-23 00:50:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container agnhost-container ready: true, 
restart count 0 Oct 23 00:50:58.306: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:50:58.306: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 00:50:58.306: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 00:50:58.306: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:50:58.306: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:50:58.306: INFO: test-container-pod started at 2021-10-23 00:50:51 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container webserver ready: false, restart count 0 Oct 23 00:50:58.306: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:50:58.306: INFO: adopt-release-4gfgm started at 2021-10-23 00:50:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container c ready: false, restart count 0 Oct 23 00:50:58.306: INFO: affinity-nodeport-62bdl started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:50:58.306: INFO: Container affinity-nodeport ready: true, restart count 0 W1023 00:50:58.319098 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
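------------------------------
[Note] The per-node dumps in this failure report (Node Info, kubelet pod listings, latency metrics) are emitted automatically by the e2e framework whenever a spec fails. A rough manual equivalent is sketched below with kubectl; the node name is taken from the log, and these commands only approximate what the framework gathers through the Go client rather than reproducing its exact code path:

# Conditions, capacity/allocatable, and cached images for the node
kubectl describe node node1

# Every pod scheduled onto that node, across all namespaces
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node1

# Recent events involving the node (kubelet heartbeats, pressure transitions, etc.)
kubectl get events --all-namespaces --field-selector involvedObject.name=node1
------------------------------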
Oct 23 00:51:02.261: INFO: Latency metrics for node node1 Oct 23 00:51:02.261: INFO: Logging node info for node node2 Oct 23 00:51:02.264: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 72179 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:51:02.264: INFO: Logging kubelet events for node node2 Oct 23 00:51:02.266: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 00:51:02.281: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:51:02.281: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 00:51:02.281: INFO: Container discover ready: false, restart count 0 Oct 23 00:51:02.281: INFO: Container init ready: false, restart count 0 Oct 23 00:51:02.281: INFO: Container install ready: false, restart count 0 Oct 23 00:51:02.281: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 00:51:02.281: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:51:02.281: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC 
(0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:51:02.281: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:51:02.281: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:02.281: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:02.281: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:51:02.281: INFO: annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f started at 2021-10-23 00:50:48 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container client-container ready: true, restart count 0 Oct 23 00:51:02.281: INFO: downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa started at 2021-10-23 00:50:59 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container client-container ready: false, restart count 0 Oct 23 00:51:02.281: INFO: affinity-nodeport-tpnkv started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container affinity-nodeport ready: true, restart count 0 Oct 23 00:51:02.281: INFO: test-new-deployment-847dcfb7fb-bxkr7 started at 2021-10-23 00:50:58 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container httpd ready: true, restart count 0 Oct 23 00:51:02.281: INFO: externalname-service-szgbk started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:51:02.281: INFO: adopt-release-4hbtv started at 2021-10-23 00:50:47 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container c ready: true, restart count 0 Oct 23 00:51:02.281: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:51:02.281: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 00:51:02.281: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container tas-extender ready: true, restart count 0 Oct 23 00:51:02.281: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:51:02.281: INFO: Container collectd ready: true, restart count 0 Oct 23 00:51:02.281: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:51:02.281: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:51:02.281: INFO: affinity-nodeport-mrgw7 started at 2021-10-23 00:48:37 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container affinity-nodeport ready: false, restart count 0 Oct 23 00:51:02.281: INFO: execpod2728q started at 2021-10-23 00:48:49 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:51:02.281: INFO: netserver-1 started at 2021-10-23 00:50:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container webserver ready: true, restart count 0 Oct 23 00:51:02.281: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:02.281: INFO: Container 
nginx-proxy ready: true, restart count 2
Oct 23 00:51:02.281: INFO: adopt-release-2klcw started at 2021-10-23 00:50:47 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:51:02.281: INFO: Container c ready: true, restart count 0
Oct 23 00:51:02.281: INFO: host-test-container-pod started at 2021-10-23 00:50:51 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:51:02.281: INFO: Container agnhost-container ready: true, restart count 0
Oct 23 00:51:02.281: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:51:02.281: INFO: Container nodereport ready: true, restart count 1
Oct 23 00:51:02.281: INFO: Container reconcile ready: true, restart count 0
W1023 00:51:02.300774 35 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 00:51:02.655: INFO: Latency metrics for node node2
Oct 23 00:51:02.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3134" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [142.368 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 23 00:50:57.965: Unexpected error:
      <*errors.errorString | 0xc0025bb330>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31722 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31722 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":9,"skipped":298,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:50:59.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 23 00:50:59.093: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa" in namespace "projected-6806" to be "Succeeded or Failed"
Oct 23 00:50:59.095: INFO: Pod "downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.550869ms Oct 23 00:51:01.098: INFO: Pod "downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005271534s Oct 23 00:51:03.102: INFO: Pod "downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009272206s Oct 23 00:51:05.108: INFO: Pod "downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015647131s STEP: Saw pod success Oct 23 00:51:05.108: INFO: Pod "downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa" satisfied condition "Succeeded or Failed" Oct 23 00:51:05.111: INFO: Trying to get logs from node node2 pod downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa container client-container: STEP: delete the pod Oct 23 00:51:05.246: INFO: Waiting for pod downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa to disappear Oct 23 00:51:05.248: INFO: Pod downwardapi-volume-91f1492a-8653-4650-bd02-4696ee237afa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:05.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6806" for this suite. • [SLOW TEST:6.204 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:00.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-7786/secret-test-06227f7b-6f9a-4a44-b686-e295a486c735 STEP: Creating a pod to test consume secrets Oct 23 00:51:00.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887" in namespace "secrets-7786" to be "Succeeded or Failed" Oct 23 00:51:00.699: INFO: Pod "pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.651183ms Oct 23 00:51:02.701: INFO: Pod "pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005069492s Oct 23 00:51:04.705: INFO: Pod "pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008540524s Oct 23 00:51:06.710: INFO: Pod "pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014002396s STEP: Saw pod success Oct 23 00:51:06.710: INFO: Pod "pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887" satisfied condition "Succeeded or Failed" Oct 23 00:51:06.713: INFO: Trying to get logs from node node1 pod pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887 container env-test: STEP: delete the pod Oct 23 00:51:06.726: INFO: Waiting for pod pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887 to disappear Oct 23 00:51:06.729: INFO: Pod pod-configmaps-d538ddec-7551-4b87-8229-b08e04fe1887 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7786" for this suite. • [SLOW TEST:6.078 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":362,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:48.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:07.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7368" for this suite. 
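------------------------------
[Note] The ReplicationController lifecycle steps above drive create, patch, status update, scale, and delete-by-collection through the API and wait on watch events for each transition. A condensed shell sketch of the same flow follows; the manifest and names are illustrative, not the suite's generated ones, and kubectl only approximates the REST calls the test issues directly:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo
  labels:
    app: rc-demo
spec:
  replicas: 1
  selector:
    app: rc-demo
  template:
    metadata:
      labels:
        app: rc-demo
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF

# Patch metadata, then scale; the suite patches the scale subresource,
# for which `kubectl scale` is the CLI equivalent
kubectl patch rc rc-demo -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl scale rc rc-demo --replicas=2
kubectl get rc rc-demo -o jsonpath='{.status.replicas}'

# Delete by label selector, comparable to the delete-collection step
kubectl delete rc -l app=rc-demo
------------------------------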
• [SLOW TEST:19.026 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":24,"skipped":300,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:50:27.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-4731 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 23 00:50:27.548: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 23 00:50:27.581: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:29.585: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:50:31.584: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:33.586: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:35.584: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:37.587: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:39.585: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:41.586: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:43.588: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:45.585: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 23 00:50:47.585: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 23 00:50:47.590: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 23 00:50:49.594: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 23 00:50:51.597: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 23 00:51:05.630: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Oct 23 00:51:05.631: INFO: Going to poll 10.244.3.105 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 23 00:51:05.632: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.105 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4731 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:51:05.632: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:51:06.713: INFO: Found all 1 expected endpoints: [netserver-0] Oct 23 00:51:06.713: INFO: Going to poll 
10.244.4.128 on port 8081 at least 0 times, with a maximum of 34 tries before failing Oct 23 00:51:06.715: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4731 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:51:06.715: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:51:07.834: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:07.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4731" for this suite. • [SLOW TEST:40.316 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:48:37.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-3379 STEP: creating service affinity-nodeport in namespace services-3379 STEP: creating replication controller affinity-nodeport in namespace services-3379 I1023 00:48:37.888726 25 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3379, replica count: 3 I1023 00:48:40.940618 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:43.941121 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:46.941345 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:48:49.942842 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 00:48:49.953: INFO: Creating new exec pod Oct 23 00:48:56.974: INFO: Running 
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:48:37.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-3379
STEP: creating service affinity-nodeport in namespace services-3379
STEP: creating replication controller affinity-nodeport in namespace services-3379
I1023 00:48:37.888726 25 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3379, replica count: 3
I1023 00:48:40.940618 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 00:48:43.941121 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 00:48:46.941345 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 00:48:49.942842 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 00:48:49.953: INFO: Creating new exec pod
Oct 23 00:48:56.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 23 00:48:57.233: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Oct 23 00:48:57.233: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 23 00:48:57.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.44.107 80'
Oct 23 00:48:57.457: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.44.107 80\nConnection to 10.233.44.107 80 port [tcp/http] succeeded!\n"
Oct 23 00:48:57.457: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 23 00:48:57.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068'
Oct 23 00:48:57.692: INFO: rc: 1
Oct 23 00:48:57.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31068
nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 00:48:58.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068'
Oct 23 00:48:59.000: INFO: rc: 1
Oct 23 00:48:59.000: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31068
nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 23 00:48:59.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068'
Oct 23 00:48:59.918: INFO: rc: 1
Oct 23 00:48:59.918: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31068
nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
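[The probes above hit the same service three ways: by name (affinity-nodeport:80), by ClusterIP (10.233.44.107:80), and by NodePort (10.10.190.207:31068). The "400 Bad Request" replies on the first two are expected, because "hostName" is not a valid HTTP request; the reachability check only cares that the TCP connect succeeded. "Connection refused" on the NodePort usually means kube-proxy has not yet programmed port 31068 on node 10.10.190.207, which is why the framework keeps retrying. A manual equivalent of the failing probe, using this run's names:

  kubectl --kubeconfig=/root/.kube/config -n services-3379 exec execpod-affinityr5cmp -- \
    /bin/sh -x -c "echo hostName | nc -v -t -w 2 10.10.190.207 31068"
]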
[... the identical NodePort probe was retried roughly once per second; every attempt from 00:49:00.693 through 00:50:25.692 failed the same way, with rc: 1 and "nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused", each followed by "Retrying...". The only variation across attempts is the occasional reordering of the interleaved "+ echo hostName" / "+ nc" shell trace lines in stderr ...]
Retrying...
Oct 23 00:50:26.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:26.939: INFO: rc: 1 Oct 23 00:50:26.939: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:27.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:27.992: INFO: rc: 1 Oct 23 00:50:27.992: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:28.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:29.147: INFO: rc: 1 Oct 23 00:50:29.147: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:29.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:29.974: INFO: rc: 1 Oct 23 00:50:29.974: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:30.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:30.943: INFO: rc: 1 Oct 23 00:50:30.943: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:31.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:31.903: INFO: rc: 1 Oct 23 00:50:31.904: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:32.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:32.955: INFO: rc: 1 Oct 23 00:50:32.955: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:33.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:34.239: INFO: rc: 1 Oct 23 00:50:34.239: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:34.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:35.432: INFO: rc: 1 Oct 23 00:50:35.433: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:35.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:35.984: INFO: rc: 1 Oct 23 00:50:35.984: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:36.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:36.979: INFO: rc: 1 Oct 23 00:50:36.979: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:37.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:37.938: INFO: rc: 1 Oct 23 00:50:37.939: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:38.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:38.949: INFO: rc: 1 Oct 23 00:50:38.949: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:39.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:39.931: INFO: rc: 1 Oct 23 00:50:39.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:40.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:40.930: INFO: rc: 1 Oct 23 00:50:40.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:41.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:41.944: INFO: rc: 1 Oct 23 00:50:41.944: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:42.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:42.933: INFO: rc: 1 Oct 23 00:50:42.933: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:43.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:44.034: INFO: rc: 1 Oct 23 00:50:44.034: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:44.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:44.993: INFO: rc: 1 Oct 23 00:50:44.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:45.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:45.944: INFO: rc: 1 Oct 23 00:50:45.944: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:46.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:46.944: INFO: rc: 1 Oct 23 00:50:46.944: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:47.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:47.943: INFO: rc: 1 Oct 23 00:50:47.943: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:48.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:50.094: INFO: rc: 1 Oct 23 00:50:50.094: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:50.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:51.124: INFO: rc: 1 Oct 23 00:50:51.124: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:50:51.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:52.234: INFO: rc: 1 Oct 23 00:50:52.234: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:52.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:52.948: INFO: rc: 1 Oct 23 00:50:52.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:53.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:53.961: INFO: rc: 1 Oct 23 00:50:53.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:54.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:54.942: INFO: rc: 1 Oct 23 00:50:54.942: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
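The loop above, and the final attempts below, is the e2e framework's service-reachability poll: it execs into the client pod and tries one short TCP connection to the NodePort each second until a 2m0s deadline expires. A minimal Go sketch of that polling pattern, using the endpoint and pod names from this log; the helper names and the standalone-program shape are illustrative, not the framework's actual API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkReachable mirrors the logged command:
//   kubectl --namespace=services-3379 exec execpod-affinityr5cmp -- \
//     /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 31068'
// It returns nil only when the TCP connection to host:port succeeds.
func checkReachable(namespace, pod, host string, port int) error {
	shell := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"--namespace="+namespace, "exec", pod, "--", "/bin/sh", "-x", "-c", shell).CombinedOutput()
	if err != nil {
		return fmt.Errorf("service reachability failing: %v, output: %s", err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // the 2m0s timeout reported in the FAIL below
	for time.Now().Before(deadline) {
		if err := checkReachable("services-3379", "execpod-affinityr5cmp", "10.10.190.207", 31068); err == nil {
			fmt.Println("endpoint reachable")
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(1 * time.Second) // the log shows roughly 1s between attempts
	}
	fmt.Println("FAIL: service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31068 over TCP protocol")
}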
Oct 23 00:50:55.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:55.935: INFO: rc: 1 Oct 23 00:50:55.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:56.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:56.951: INFO: rc: 1 Oct 23 00:50:56.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:57.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:57.923: INFO: rc: 1 Oct 23 00:50:57.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:57.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068' Oct 23 00:50:58.154: INFO: rc: 1 Oct 23 00:50:58.154: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3379 exec execpod-affinityr5cmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31068: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31068 nc: connect to 10.10.190.207 port 31068 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:50:58.155: FAIL: Unexpected error: <*errors.errorString | 0xc005a24650>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31068 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31068 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000ec66e0, 0x779f8f8, 0xc004d1e6e0, 0xc00450b180, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531 k8s.io/kubernetes/test/e2e/network.glob..func24.25() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001be7380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001be7380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001be7380, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 Oct 23 00:50:58.156: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-3379, will wait for the garbage collector to delete the pods Oct 23 00:50:58.231: INFO: Deleting ReplicationController affinity-nodeport took: 4.005885ms Oct 23 00:50:58.331: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.820487ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-3379". STEP: Found 27 events. Oct 23 00:51:06.748: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-62bdl: { } Scheduled: Successfully assigned services-3379/affinity-nodeport-62bdl to node1 Oct 23 00:51:06.748: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-mrgw7: { } Scheduled: Successfully assigned services-3379/affinity-nodeport-mrgw7 to node2 Oct 23 00:51:06.748: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-tpnkv: { } Scheduled: Successfully assigned services-3379/affinity-nodeport-tpnkv to node2 Oct 23 00:51:06.748: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityr5cmp: { } Scheduled: Successfully assigned services-3379/execpod-affinityr5cmp to node2 Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:37 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-tpnkv Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:37 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-62bdl Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:37 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-mrgw7 Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:41 +0000 UTC - event for affinity-nodeport-mrgw7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:41 +0000 UTC - event for affinity-nodeport-mrgw7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 297.725931ms Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:41 +0000 UTC - event for affinity-nodeport-tpnkv: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:42 +0000 UTC - event for affinity-nodeport-mrgw7: {kubelet node2} Started: Started container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:42 +0000 UTC - event for affinity-nodeport-mrgw7: {kubelet node2} Created: Created container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:42 +0000 UTC - event for affinity-nodeport-tpnkv: {kubelet node2} Pulled: Successfully 
pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 454.452258ms Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:42 +0000 UTC - event for affinity-nodeport-tpnkv: {kubelet node2} Created: Created container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:43 +0000 UTC - event for affinity-nodeport-tpnkv: {kubelet node2} Started: Started container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:45 +0000 UTC - event for affinity-nodeport-62bdl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:46 +0000 UTC - event for affinity-nodeport-62bdl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 470.556489ms Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:46 +0000 UTC - event for affinity-nodeport-62bdl: {kubelet node1} Created: Created container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:46 +0000 UTC - event for affinity-nodeport-62bdl: {kubelet node1} Started: Started container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:52 +0000 UTC - event for execpod-affinityr5cmp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:52 +0000 UTC - event for execpod-affinityr5cmp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 310.528265ms Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:52 +0000 UTC - event for execpod-affinityr5cmp: {kubelet node2} Created: Created container agnhost-container Oct 23 00:51:06.748: INFO: At 2021-10-23 00:48:53 +0000 UTC - event for execpod-affinityr5cmp: {kubelet node2} Started: Started container agnhost-container Oct 23 00:51:06.748: INFO: At 2021-10-23 00:50:58 +0000 UTC - event for affinity-nodeport-mrgw7: {kubelet node2} Killing: Stopping container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:50:58 +0000 UTC - event for affinity-nodeport-tpnkv: {kubelet node2} Killing: Stopping container affinity-nodeport Oct 23 00:51:06.748: INFO: At 2021-10-23 00:50:58 +0000 UTC - event for execpod-affinityr5cmp: {kubelet node2} Killing: Stopping container agnhost-container Oct 23 00:51:06.748: INFO: At 2021-10-23 00:50:59 +0000 UTC - event for affinity-nodeport-62bdl: {kubelet node1} Killing: Stopping container affinity-nodeport Oct 23 00:51:06.750: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:51:06.750: INFO: Oct 23 00:51:06.753: INFO: Logging node info for node master1 Oct 23 00:51:06.756: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 72310 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:51:06.757: INFO: Logging kubelet events for node master1 Oct 23 00:51:06.759: INFO: Logging pods the kubelet thinks is on node master1 Oct 23 00:51:06.780: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:51:06.780: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Container coredns ready: true, restart count 2 Oct 23 00:51:06.780: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:06.780: INFO: Container docker-registry ready: true, restart count 0 Oct 23 00:51:06.780: INFO: Container nginx ready: true, restart count 0 Oct 23 00:51:06.780: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:06.780: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:06.780: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:51:06.780: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:51:06.780: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 00:51:06.780: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:51:06.780: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:51:06.780: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:51:06.780: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.780: INFO: Container kube-scheduler ready: true, restart count 0 W1023 00:51:06.794742 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
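The dump above ("Logging node info", "Logging kubelet events", the per-node pod list) is the framework's post-failure diagnostics, repeated for each node. A standalone client-go sketch that reproduces the node-conditions part of such a dump; the kubeconfig path is the one from this log, but the program itself is illustrative, not the suite's actual dump code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("Logging node info for node %s\n", n.Name)
		// Print the NodeCondition summary (NetworkUnavailable, MemoryPressure, ...).
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}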
Oct 23 00:51:06.872: INFO: Latency metrics for node master1 Oct 23 00:51:06.872: INFO: Logging node info for node master2 Oct 23 00:51:06.875: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 72162 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:51:06.875: INFO: Logging kubelet events for node master2 Oct 23 00:51:06.878: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 00:51:06.886: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:51:06.886: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Container autoscaler ready: true, restart count 1 Oct 23 00:51:06.886: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:06.886: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:06.886: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:51:06.886: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:51:06.886: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:51:06.886: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:51:06.886: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:51:06.886: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:51:06.886: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.886: INFO: Container kube-multus ready: true, restart count 1 W1023 00:51:06.901786 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
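"Logging kubelet events for node master2" corresponds to listing core/v1 Events whose involvedObject is the node. A minimal sketch, assuming the usual convention that node events are recorded in the "default" namespace; the field-selector keys are the standard involvedObject ones, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Select only events whose involvedObject is the node we care about.
	events, err := client.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Node,involvedObject.name=master2",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp, e.Reason, e.Message)
	}
}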
Oct 23 00:51:06.969: INFO: Latency metrics for node master2 Oct 23 00:51:06.969: INFO: Logging node info for node master3 Oct 23 00:51:06.972: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 72286 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:51:04 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:51:06.972: INFO: Logging kubelet events for node master3 Oct 23 00:51:06.974: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 00:51:06.984: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:51:06.984: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:51:06.984: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:51:06.984: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:51:06.984: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:51:06.984: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 00:51:06.984: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:06.984: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:06.984: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:51:06.984: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:51:06.984: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:51:06.984: 
INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:06.984: INFO: Container coredns ready: true, restart count 2 W1023 00:51:06.999338 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:51:07.065: INFO: Latency metrics for node master3 Oct 23 00:51:07.065: INFO: Logging node info for node node1 Oct 23 00:51:07.069: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 72145 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:17:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:59 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:59 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:50:59 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:50:59 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:51:07.069: INFO: Logging kubelet events for node node1 Oct 23 00:51:07.071: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 00:51:07.088: INFO: 
kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.088: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 00:51:07.088: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 00:51:07.089: INFO: Container config-reloader ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container grafana ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container prometheus ready: true, restart count 1 Oct 23 00:51:07.089: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:51:07.089: INFO: Container collectd ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:51:07.089: INFO: externalname-service-9rwls started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:51:07.089: INFO: forbid-27249171-rpsp4 started at 2021-10-23 00:51:00 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container c ready: true, restart count 0 Oct 23 00:51:07.089: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:07.089: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 00:51:07.089: INFO: netserver-0 started at 2021-10-23 00:50:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container webserver ready: true, restart count 0 Oct 23 00:51:07.089: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:51:07.089: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:51:07.089: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:51:07.089: INFO: rc-test-k8qzp started at 2021-10-23 00:50:48 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container rc-test ready: true, restart count 0 Oct 23 00:51:07.089: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 00:51:07.089: INFO: Container discover ready: false, restart count 0 Oct 23 00:51:07.089: INFO: Container init ready: false, restart count 0 Oct 23 00:51:07.089: INFO: Container install ready: false, restart count 0 Oct 23 00:51:07.089: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:07.089: INFO: Container nodereport ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container reconcile ready: true, restart count 0 Oct 23 00:51:07.089: INFO: rc-test-ghwq7 started at 2021-10-23 00:51:02 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container rc-test ready: false, restart count 0 Oct 23 00:51:07.089: INFO: 
nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:51:07.089: INFO: downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988 started at 2021-10-23 00:51:06 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container client-container ready: false, restart count 0 Oct 23 00:51:07.089: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:51:07.089: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 00:51:07.089: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 00:51:07.089: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:07.089: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:07.089: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:51:07.089: INFO: test-container-pod started at 2021-10-23 00:50:51 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container webserver ready: true, restart count 0 Oct 23 00:51:07.089: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:51:07.089: INFO: adopt-release-4gfgm started at 2021-10-23 00:50:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.089: INFO: Container c ready: true, restart count 0 W1023 00:51:07.100241 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
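(The failing spec reported further below — [sig-network] Services session affinity for a NodePort service — probes 10.10.190.207:31068, i.e. node1's InternalIP from the dump above plus the service's allocated NodePort. A minimal sketch of the kind of Service the spec exercises, with illustrative name, selector, and ports; the suite generates its actual manifest in Go:

  # Sketch: a NodePort Service with ClientIP session affinity, the behaviour
  # the failing spec asserts; all names here are illustrative.
  kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-nodeport-demo
  spec:
    type: NodePort
    sessionAffinity: ClientIP
    selector:
      app: affinity-backend
    ports:
    - port: 80
      targetPort: 9376
  EOF
  # The suite then sends repeated requests to <nodeIP>:<nodePort> and expects
  # every response from a single backend; in this run the endpoint
  # 10.10.190.207:31068 never became reachable within the 2m0s timeout.
)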
Oct 23 00:51:07.496: INFO: Latency metrics for node node1 Oct 23 00:51:07.496: INFO: Logging node info for node node2 Oct 23 00:51:07.498: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 72179 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:51:00 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:51:07.500: INFO: Logging kubelet events for node node2 Oct 23 00:51:07.504: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 00:51:07.590: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:51:07.590: INFO: adopt-release-2klcw started at 2021-10-23 00:50:47 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container c ready: true, restart count 0 Oct 23 00:51:07.590: INFO: host-test-container-pod started at 2021-10-23 00:50:51 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 00:51:07.590: INFO: bin-false8c1d9b10-d19b-418e-a9cb-fd5979c0d934 started at 2021-10-23 00:51:02 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container bin-false8c1d9b10-d19b-418e-a9cb-fd5979c0d934 ready: false, restart count 0 Oct 23 00:51:07.590: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:07.590: INFO: Container nodereport ready: true, restart count 1 Oct 23 00:51:07.590: INFO: 
Container reconcile ready: true, restart count 0 Oct 23 00:51:07.590: INFO: pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012 started at (0+0 container statuses recorded) Oct 23 00:51:07.590: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:51:07.590: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 00:51:07.590: INFO: Container discover ready: false, restart count 0 Oct 23 00:51:07.590: INFO: Container init ready: false, restart count 0 Oct 23 00:51:07.590: INFO: Container install ready: false, restart count 0 Oct 23 00:51:07.590: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 00:51:07.590: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:51:07.590: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:51:07.590: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:51:07.590: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:51:07.590: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:51:07.590: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:51:07.590: INFO: annotationupdate72a0ab84-4012-4afb-8b3c-9871c2dc779f started at 2021-10-23 00:50:48 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.590: INFO: Container client-container ready: true, restart count 0 Oct 23 00:51:07.590: INFO: externalname-service-szgbk started at 2021-10-23 00:48:40 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Container externalname-service ready: true, restart count 0 Oct 23 00:51:07.591: INFO: adopt-release-4hbtv started at 2021-10-23 00:50:47 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Container c ready: true, restart count 0 Oct 23 00:51:07.591: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:51:07.591: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 00:51:07.591: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Container tas-extender ready: true, restart count 0 Oct 23 00:51:07.591: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:51:07.591: INFO: Container collectd ready: true, restart count 0 Oct 23 00:51:07.591: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:51:07.591: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:51:07.591: INFO: execpod2728q started at 2021-10-23 00:48:49 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Container agnhost-container 
ready: true, restart count 0 Oct 23 00:51:07.591: INFO: netserver-1 started at 2021-10-23 00:50:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Container webserver ready: true, restart count 0 Oct 23 00:51:07.591: INFO: pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960 started at 2021-10-23 00:51:02 +0000 UTC (0+1 container statuses recorded) Oct 23 00:51:07.591: INFO: Container secret-volume-test ready: false, restart count 0 W1023 00:51:07.604879 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:51:08.256: INFO: Latency metrics for node node2 Oct 23 00:51:08.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3379" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [150.412 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:50:58.155: Unexpected error: <*errors.errorString | 0xc005a24650>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31068 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31068 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":579,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:02.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-a3189d1a-3cfb-4488-bc8e-70dba9c9941d STEP: Creating a pod to test consume secrets Oct 23 00:51:02.413: INFO: Waiting up to 5m0s for pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960" in namespace "secrets-2831" to be "Succeeded or Failed" Oct 23 00:51:02.415: INFO: Pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08445ms Oct 23 00:51:04.421: INFO: Pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007519669s Oct 23 00:51:06.424: INFO: Pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010924715s Oct 23 00:51:08.427: INFO: Pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.013627806s Oct 23 00:51:10.430: INFO: Pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016996318s STEP: Saw pod success Oct 23 00:51:10.430: INFO: Pod "pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960" satisfied condition "Succeeded or Failed" Oct 23 00:51:10.432: INFO: Trying to get logs from node node2 pod pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960 container secret-volume-test: STEP: delete the pod Oct 23 00:51:10.446: INFO: Waiting for pod pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960 to disappear Oct 23 00:51:10.448: INFO: Pod pod-secrets-a33d8e28-9a73-4278-9b38-b00c75944960 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:10.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2831" for this suite. • [SLOW TEST:8.076 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":547,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:02.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:10.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4872" for this suite. 
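[Example] The Kubelet spec above asserts that a container which always fails must report a terminated state with a reason. A hand-run sketch of the same check (pod name and sleep are illustrative, not from this run):

  $ kubectl run always-fails --image=busybox --restart=Never --command -- /bin/false
  $ sleep 10   # give the kubelet time to start and fail the container
  $ kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
  Error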
• [SLOW TEST:8.053 seconds]
[sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":325,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:51:06.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 23 00:51:06.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988" in namespace "downward-api-480" to be "Succeeded or Failed"
Oct 23 00:51:06.811: INFO: Pod "downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612837ms
Oct 23 00:51:08.815: INFO: Pod "downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007308889s
Oct 23 00:51:10.818: INFO: Pod "downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010236239s
STEP: Saw pod success
Oct 23 00:51:10.818: INFO: Pod "downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988" satisfied condition "Succeeded or Failed"
Oct 23 00:51:10.821: INFO: Trying to get logs from node node1 pod downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988 container client-container:
STEP: delete the pod
Oct 23 00:51:10.833: INFO: Waiting for pod downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988 to disappear
Oct 23 00:51:10.835: INFO: Pod downwardapi-volume-c2963dc0-f1d7-4112-930f-cb6aaf78a988 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:51:10.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-480" for this suite.
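[Example] The DefaultMode assertion above comes down to mounting a downwardAPI volume with an explicit file mode and reading the permissions back from inside the container. A minimal sketch, with illustrative names and a 0400 mode (not the pod from this run):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # -L dereferences the symlink the downwardAPI atomic writer creates
      command: ["sh", "-c", "stat -L -c %a /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF
  $ kubectl logs downward-mode-demo   # once Succeeded, the log should read: 400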
• ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:07.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 00:51:11.371: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:11.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9267" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":322,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:05.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-1608d2f1-e739-4a18-ac8d-5c4e9e7d5326 STEP: Creating a pod to test consume configMaps Oct 23 00:51:05.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012" in namespace "configmap-9786" to be "Succeeded or Failed" Oct 23 00:51:05.451: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652153ms Oct 23 00:51:07.454: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006421279s Oct 23 00:51:09.457: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008757702s Oct 23 00:51:11.460: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012428038s Oct 23 00:51:13.464: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.015964137s Oct 23 00:51:15.466: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01851079s STEP: Saw pod success Oct 23 00:51:15.466: INFO: Pod "pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012" satisfied condition "Succeeded or Failed" Oct 23 00:51:15.469: INFO: Trying to get logs from node node2 pod pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012 container agnhost-container: STEP: delete the pod Oct 23 00:51:15.480: INFO: Waiting for pod pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012 to disappear Oct 23 00:51:15.482: INFO: Pod pod-configmaps-c89fc870-9fa8-40f3-b6f4-ba1a709b0012 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:15.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9786" for this suite. • [SLOW TEST:10.169 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":293,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:10.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:51:10.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a" in namespace "downward-api-9476" to be "Succeeded or Failed" Oct 23 00:51:10.502: INFO: Pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477818ms Oct 23 00:51:12.506: INFO: Pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00646593s Oct 23 00:51:14.511: INFO: Pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011611825s Oct 23 00:51:16.517: INFO: Pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017646706s Oct 23 00:51:18.522: INFO: Pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.023229957s STEP: Saw pod success Oct 23 00:51:18.523: INFO: Pod "downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a" satisfied condition "Succeeded or Failed" Oct 23 00:51:18.525: INFO: Trying to get logs from node node2 pod downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a container client-container: STEP: delete the pod Oct 23 00:51:18.537: INFO: Waiting for pod downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a to disappear Oct 23 00:51:18.539: INFO: Pod downwardapi-volume-88f073d3-2dc5-4695-a09f-500b1ea5091a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:18.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9476" for this suite. • [SLOW TEST:8.080 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":552,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:18.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Oct 23 00:51:18.603: INFO: created test-event-1 Oct 23 00:51:18.606: INFO: created test-event-2 Oct 23 00:51:18.609: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Oct 23 00:51:18.614: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Oct 23 00:51:18.635: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:18.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7081" for this suite. 
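[Example] The Events spec above exercises delete-by-collection: create a few labelled events, then remove them all with one label-selector call. A hand-run equivalent (label, names, and the minimal Event manifest are illustrative; core-v1 Events are ordinary namespaced objects, so they can be applied directly):

  $ for i in 1 2 3; do kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Event
  metadata:
    name: test-event-$i
    namespace: default
    labels:
      testevent-set: "true"
  involvedObject:
    kind: Pod
    name: placeholder
    namespace: default
  reason: Testing
  message: created for the delete-collection demo
  type: Normal
  EOF
  done
  $ kubectl get events -n default -l testevent-set=true
  $ kubectl delete events -n default -l testevent-set=true
  $ kubectl get events -n default -l testevent-set=true   # expect: No resources found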
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":25,"skipped":569,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:10.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 23 00:51:11.149: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 23 00:51:13.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:51:15.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:51:17.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 23 00:51:19.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770547071, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 23 00:51:22.170: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:22.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3316" for this suite. STEP: Destroying namespace "webhook-3316-markers" for this suite. 
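[Example] The patching step above toggles which admission operations the webhook's rule matches (dropping and then restoring CREATE). Against a hypothetical configuration named sample-webhook, the same JSON patch would look like:

  $ kubectl patch mutatingwebhookconfiguration sample-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'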
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.455 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":11,"skipped":331,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:50:24.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 23 00:50:24.124: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:51:25.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6365" for this suite.
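[Example] Listing custom resource definition objects, as exercised above, goes through the aggregated API once the definition is established. With a hypothetical widgets.example.com CRD (group and kind are illustrative, not from this run):

  $ kubectl get crd                                  # the definitions themselves
  $ kubectl api-resources --api-group=example.com    # kinds the CRD group now serves
  $ kubectl get widgets.example.com -A               # instances, listed via the served API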
• [SLOW TEST:61.304 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":23,"skipped":376,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:11.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Oct 23 00:51:11.455: INFO: Waiting up to 5m0s for pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d" in namespace "emptydir-1736" to be "Succeeded or Failed" Oct 23 00:51:11.458: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.900806ms Oct 23 00:51:13.461: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00598057s Oct 23 00:51:15.465: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009227991s Oct 23 00:51:17.469: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013250625s Oct 23 00:51:19.473: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017564191s Oct 23 00:51:21.475: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020101542s Oct 23 00:51:23.480: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025069613s Oct 23 00:51:25.485: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030102311s Oct 23 00:51:27.489: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.033260455s STEP: Saw pod success Oct 23 00:51:27.489: INFO: Pod "pod-007e2462-3d14-444e-8c5a-b379c9f62a8d" satisfied condition "Succeeded or Failed" Oct 23 00:51:27.491: INFO: Trying to get logs from node node1 pod pod-007e2462-3d14-444e-8c5a-b379c9f62a8d container test-container: STEP: delete the pod Oct 23 00:51:27.506: INFO: Waiting for pod pod-007e2462-3d14-444e-8c5a-b379c9f62a8d to disappear Oct 23 00:51:27.511: INFO: Pod pod-007e2462-3d14-444e-8c5a-b379c9f62a8d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:27.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1736" for this suite. • [SLOW TEST:16.111 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":337,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:27.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:27.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1219" for this suite. 
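[Example] The ConfigMap lifecycle steps above map one-to-one onto kubectl verbs: create, patch, list by label selector across namespaces, then delete by collection. A hand-run sketch (names and label are illustrative):

  $ kubectl create configmap demo-cm --from-literal=key=value
  $ kubectl label configmap demo-cm test-configmap=lifecycle
  $ kubectl patch configmap demo-cm -p '{"data":{"key":"patched"}}'
  $ kubectl get configmaps -A -l test-configmap=lifecycle
  $ kubectl delete configmaps -l test-configmap=lifecycle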
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":27,"skipped":341,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":382,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:10.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Oct 23 00:51:10.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 create -f -' Oct 23 00:51:11.213: INFO: stderr: "" Oct 23 00:51:11.214: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 23 00:51:11.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:51:11.384: INFO: stderr: "" Oct 23 00:51:11.384: INFO: stdout: "update-demo-nautilus-jpbf2 update-demo-nautilus-x2hvf " Oct 23 00:51:11.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-jpbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:51:11.534: INFO: stderr: "" Oct 23 00:51:11.534: INFO: stdout: "" Oct 23 00:51:11.534: INFO: update-demo-nautilus-jpbf2 is created but not running Oct 23 00:51:16.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:51:16.700: INFO: stderr: "" Oct 23 00:51:16.700: INFO: stdout: "update-demo-nautilus-jpbf2 update-demo-nautilus-x2hvf " Oct 23 00:51:16.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-jpbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:51:16.860: INFO: stderr: "" Oct 23 00:51:16.860: INFO: stdout: "" Oct 23 00:51:16.860: INFO: update-demo-nautilus-jpbf2 is created but not running Oct 23 00:51:21.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:51:22.016: INFO: stderr: "" Oct 23 00:51:22.017: INFO: stdout: "update-demo-nautilus-jpbf2 update-demo-nautilus-x2hvf " Oct 23 00:51:22.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-jpbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:51:22.162: INFO: stderr: "" Oct 23 00:51:22.162: INFO: stdout: "" Oct 23 00:51:22.162: INFO: update-demo-nautilus-jpbf2 is created but not running Oct 23 00:51:27.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Oct 23 00:51:27.324: INFO: stderr: "" Oct 23 00:51:27.324: INFO: stdout: "update-demo-nautilus-jpbf2 update-demo-nautilus-x2hvf " Oct 23 00:51:27.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-jpbf2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:51:27.489: INFO: stderr: "" Oct 23 00:51:27.489: INFO: stdout: "true" Oct 23 00:51:27.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-jpbf2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:51:27.652: INFO: stderr: "" Oct 23 00:51:27.652: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:51:27.652: INFO: validating pod update-demo-nautilus-jpbf2 Oct 23 00:51:27.656: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:51:27.656: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 00:51:27.656: INFO: update-demo-nautilus-jpbf2 is verified up and running Oct 23 00:51:27.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-x2hvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Oct 23 00:51:27.815: INFO: stderr: "" Oct 23 00:51:27.815: INFO: stdout: "true" Oct 23 00:51:27.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods update-demo-nautilus-x2hvf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Oct 23 00:51:27.983: INFO: stderr: "" Oct 23 00:51:27.983: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Oct 23 00:51:27.983: INFO: validating pod update-demo-nautilus-x2hvf Oct 23 00:51:27.986: INFO: got data: { "image": "nautilus.jpg" } Oct 23 00:51:27.986: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 23 00:51:27.986: INFO: update-demo-nautilus-x2hvf is verified up and running STEP: using delete to clean up resources Oct 23 00:51:27.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 delete --grace-period=0 --force -f -' Oct 23 00:51:28.103: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:51:28.103: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 23 00:51:28.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get rc,svc -l name=update-demo --no-headers' Oct 23 00:51:28.304: INFO: stderr: "No resources found in kubectl-1263 namespace.\n" Oct 23 00:51:28.304: INFO: stdout: "" Oct 23 00:51:28.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1263 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 23 00:51:28.485: INFO: stderr: "" Oct 23 00:51:28.485: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:28.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1263" for this suite. 
• [SLOW TEST:17.648 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":22,"skipped":382,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:18.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7 Oct 23 00:51:18.696: INFO: Pod name my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7: Found 0 pods out of 1 Oct 23 00:51:23.703: INFO: Pod name my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7: Found 1 pods out of 1 Oct 23 00:51:23.703: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7" are running Oct 23 00:51:23.708: INFO: Pod "my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7-shhrk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:51:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:51:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:51:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 00:51:18 +0000 UTC Reason: Message:}]) Oct 23 00:51:23.708: INFO: Trying to dial the pod Oct 23 00:51:28.717: INFO: Controller my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7: Got expected result from replica 1 [my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7-shhrk]: "my-hostname-basic-feb8069d-830b-4cf3-abf7-31197786e3d7-shhrk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:28.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4880" for this suite. 
• [SLOW TEST:10.056 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":26,"skipped":581,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:51:22.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Oct 23 00:51:22.285: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:51:30.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-21" for this suite.
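[Example] The Pods spec above drives its assertions off a watch rather than polling. The same event stream can be observed by hand (pod name illustrative; --output-watch-events needs a reasonably recent kubectl):

  $ kubectl get pods --watch --output-watch-events &
  $ kubectl run watched --image=busybox --restart=Never --command -- sleep 3600
  $ kubectl delete pod watched --grace-period=30
  # the watch should print ADDED/MODIFIED events for creation, then DELETED on removal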
• [SLOW TEST:8.170 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":341,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:28.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:32.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5302" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":593,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:30.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:51:30.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 create -f -' Oct 23 00:51:30.845: INFO: stderr: "" Oct 23 00:51:30.845: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 23 00:51:30.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 create -f -' Oct 23 00:51:31.138: INFO: stderr: "" Oct 23 00:51:31.138: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Oct 23 00:51:32.143: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:32.143: INFO: Found 0 / 1 Oct 23 00:51:33.141: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:33.141: INFO: Found 0 / 1 Oct 23 00:51:34.141: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:34.141: INFO: Found 1 / 1 Oct 23 00:51:34.141: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 23 00:51:34.143: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:34.143: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 23 00:51:34.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 describe pod agnhost-primary-2c6wr' Oct 23 00:51:34.324: INFO: stderr: "" Oct 23 00:51:34.324: INFO: stdout: "Name: agnhost-primary-2c6wr\nNamespace: kubectl-1364\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Sat, 23 Oct 2021 00:51:30 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.128\"\n ],\n \"mac\": \"02:93:7f:ba:1e:09\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.128\"\n ],\n \"mac\": \"02:93:7f:ba:1e:09\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.128\nIPs:\n IP: 10.244.3.128\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://633e14cda1ffee9750c0963011cb53c35c6f888575d36c5b3c0e2a02a113c9b7\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 23 Oct 2021 00:51:32 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmhsm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-dmhsm:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1364/agnhost-primary-2c6wr to node1\n Normal Pulling 2s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 332.204906ms\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Oct 23 00:51:34.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 describe rc agnhost-primary' Oct 23 00:51:34.504: INFO: stderr: "" Oct 23 00:51:34.504: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1364\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 
Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-2c6wr\n" Oct 23 00:51:34.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 describe service agnhost-primary' Oct 23 00:51:34.657: INFO: stderr: "" Oct 23 00:51:34.657: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1364\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.13.165\nIPs: 10.233.13.165\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.128:6379\nSession Affinity: None\nEvents: \n" Oct 23 00:51:34.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 describe node master1' Oct 23 00:51:34.869: INFO: stderr: "" Oct 23 00:51:34.869: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 22 Oct 2021 21:03:37 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Sat, 23 Oct 2021 00:51:31 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 22 Oct 2021 21:09:07 +0000 Fri, 22 Oct 2021 21:09:07 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Sat, 23 Oct 2021 00:51:34 +0000 Fri, 22 Oct 2021 21:03:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 23 Oct 2021 00:51:34 +0000 Fri, 22 Oct 2021 21:03:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 23 Oct 2021 00:51:34 +0000 Fri, 22 Oct 2021 21:03:34 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 23 Oct 2021 00:51:34 +0000 Fri, 22 Oct 2021 21:09:03 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 439913340Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518324Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 405424133473\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629492Ki\n pods: 110\nSystem Info:\n Machine ID: 30ce143f9c9243b59253027a77cdbf77\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: e78651c4-73ca-42e7-8083-bc7c7ebac4b6\n Kernel Version: 3.10.0-1160.45.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: 
amd64\n Container Runtime Version: docker://20.10.9\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-wtz5j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h40m\n kube-system coredns-8474476ff8-q8d8x 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 3h44m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 3h38m\n kube-system kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 3h46m\n kube-system kube-flannel-8vnf2 150m (0%) 300m (0%) 64M (0%) 500M (0%) 3h45m\n kube-system kube-multus-ds-amd64-vl8qj 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 3h45m\n kube-system kube-proxy-fhqkt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3h46m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3h29m\n monitoring node-exporter-fxb7q 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 3h32m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 23 00:51:34.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1364 describe namespace kubectl-1364' Oct 23 00:51:35.040: INFO: stderr: "" Oct 23 00:51:35.040: INFO: stdout: "Name: kubectl-1364\nLabels: e2e-framework=kubectl\n e2e-run=53658ac9-7e6a-4b18-85cf-7e1466a353ff\n kubernetes.io/metadata.name=kubectl-1364\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:35.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1364" for this suite. 
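[Example] The describe spec above issues one kubectl describe per resource kind and checks the output for expected fields. The commands themselves, as run against this cluster (kubeconfig and namespace flags elided):

  $ kubectl describe pod agnhost-primary-2c6wr
  $ kubectl describe rc agnhost-primary
  $ kubectl describe service agnhost-primary
  $ kubectl describe node master1
  $ kubectl describe namespace kubectl-1364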
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":13,"skipped":354,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:32.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 00:51:32.829: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:40.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8950" for this suite. • [SLOW TEST:8.147 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":28,"skipped":596,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:15.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 23 00:51:15.558: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:51:24.101: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:43.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-9100" for this suite. • [SLOW TEST:28.003 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":25,"skipped":306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:35.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Oct 23 00:51:35.090: INFO: namespace kubectl-8461 Oct 23 00:51:35.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8461 create -f -' Oct 23 00:51:35.464: INFO: stderr: "" Oct 23 00:51:35.464: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 23 00:51:36.467: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:36.467: INFO: Found 0 / 1 Oct 23 00:51:37.468: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:37.468: INFO: Found 0 / 1 Oct 23 00:51:38.467: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:38.468: INFO: Found 0 / 1 Oct 23 00:51:39.467: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:39.467: INFO: Found 1 / 1 Oct 23 00:51:39.467: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 23 00:51:39.470: INFO: Selector matched 1 pods for map[app:agnhost] Oct 23 00:51:39.470: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 23 00:51:39.470: INFO: wait on agnhost-primary startup in kubectl-8461 Oct 23 00:51:39.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8461 logs agnhost-primary-mqx2f agnhost-primary' Oct 23 00:51:39.643: INFO: stderr: "" Oct 23 00:51:39.643: INFO: stdout: "Paused\n" STEP: exposing RC Oct 23 00:51:39.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8461 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Oct 23 00:51:39.832: INFO: stderr: "" Oct 23 00:51:39.832: INFO: stdout: "service/rm2 exposed\n" Oct 23 00:51:39.834: INFO: Service rm2 in namespace kubectl-8461 found. 
STEP: exposing service Oct 23 00:51:41.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8461 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Oct 23 00:51:42.015: INFO: stderr: "" Oct 23 00:51:42.015: INFO: stdout: "service/rm3 exposed\n" Oct 23 00:51:42.018: INFO: Service rm3 in namespace kubectl-8461 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:44.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8461" for this suite. • [SLOW TEST:8.963 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":14,"skipped":365,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:44.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Oct 23 00:51:44.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9433 create -f -' Oct 23 00:51:44.463: INFO: stderr: "" Oct 23 00:51:44.463: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 23 00:51:44.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9433 diff -f -' Oct 23 00:51:44.771: INFO: rc: 1 Oct 23 00:51:44.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9433 delete -f -' Oct 23 00:51:44.890: INFO: stderr: "" Oct 23 00:51:44.890: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:44.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9433" for this suite. 
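------------------------------
The 'kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' step above is equivalent to creating a Service whose selector matches the RC's pod-template labels (kubectl reads the selector from the RC for you). A minimal client-go sketch (v0.21.x assumed; the service name, ports, and namespace are from the log, and app=agnhost is the label this suite's agnhost RC uses):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			// Must match the RC's pod labels so endpoints get populated.
			Selector: map[string]string{"app": "agnhost"},
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
	created, err := cs.CoreV1().Services("kubectl-8461").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("exposed service:", created.Name)
}

Exposing a service from another service ('rm3' above) works the same way: the new Service simply copies the selector of the old one.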
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":15,"skipped":378,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:41.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 23 00:51:41.133: INFO: Waiting up to 5m0s for pod "pod-760a42da-5d05-4ae1-b402-3434b7422d20" in namespace "emptydir-3311" to be "Succeeded or Failed" Oct 23 00:51:41.143: INFO: Pod "pod-760a42da-5d05-4ae1-b402-3434b7422d20": Phase="Pending", Reason="", readiness=false. Elapsed: 9.367157ms Oct 23 00:51:43.146: INFO: Pod "pod-760a42da-5d05-4ae1-b402-3434b7422d20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013089856s Oct 23 00:51:45.151: INFO: Pod "pod-760a42da-5d05-4ae1-b402-3434b7422d20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017654128s STEP: Saw pod success Oct 23 00:51:45.151: INFO: Pod "pod-760a42da-5d05-4ae1-b402-3434b7422d20" satisfied condition "Succeeded or Failed" Oct 23 00:51:45.154: INFO: Trying to get logs from node node2 pod pod-760a42da-5d05-4ae1-b402-3434b7422d20 container test-container: STEP: delete the pod Oct 23 00:51:45.269: INFO: Waiting for pod pod-760a42da-5d05-4ae1-b402-3434b7422d20 to disappear Oct 23 00:51:45.273: INFO: Pod pod-760a42da-5d05-4ae1-b402-3434b7422d20 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:45.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3311" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":681,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:43.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-e26582d6-41c4-4890-a804-ba0d602dabe0 STEP: Creating a pod to test consume secrets Oct 23 00:51:43.678: INFO: Waiting up to 5m0s for pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0" in namespace "secrets-4862" to be "Succeeded or Failed" Oct 23 00:51:43.682: INFO: Pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.747245ms Oct 23 00:51:45.685: INFO: Pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006310807s Oct 23 00:51:47.688: INFO: Pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009714105s Oct 23 00:51:49.691: INFO: Pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012564749s Oct 23 00:51:51.696: INFO: Pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017765623s STEP: Saw pod success Oct 23 00:51:51.696: INFO: Pod "pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0" satisfied condition "Succeeded or Failed" Oct 23 00:51:51.699: INFO: Trying to get logs from node node2 pod pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0 container secret-volume-test: STEP: delete the pod Oct 23 00:51:51.723: INFO: Waiting for pod pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0 to disappear Oct 23 00:51:51.725: INFO: Pod pod-secrets-81bbc8c4-cbcc-4bfb-809e-5b39a714edc0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:51.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4862" for this suite. 
• [SLOW TEST:8.097 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":366,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:51.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:51.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2969" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":27,"skipped":370,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:44.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:51.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5543" for this suite. 
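------------------------------
"Status is promptly calculated" in the ResourceQuota spec above means the quota controller mirrors Spec.Hard into Status.Hard and fills in Status.Used shortly after creation. A minimal sketch of creating a quota and polling for that (client-go v0.21.x assumed; the quota name and limits are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("5"),
				corev1.ResourceCPU:  resource.MustParse("1"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas("default").Create(context.TODO(), rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		got, err := cs.CoreV1().ResourceQuotas("default").Get(context.TODO(), "test-quota", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Calculated once Hard is mirrored and Used is populated.
		return len(got.Status.Hard) > 0 && got.Status.Used != nil, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("quota status calculated")
}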
• [SLOW TEST:7.045 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":16,"skipped":389,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:51.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Oct 23 00:51:52.002: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4665 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:52.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4665" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":17,"skipped":401,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:27.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 23 00:51:27.663: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:51:36.259: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:55.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1868" for this suite. 
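------------------------------
The CRD-publish-OpenAPI specs above verify that a CRD's structural schema is served through the aggregated OpenAPI document once the CRD is established. A minimal sketch of registering one such CRD (assuming the v0.21.x apiextensions clientset; the group, kind, and schema below are illustrative, not the test's generated ones):

package main

import (
	"context"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crd := &apiextv1.CustomResourceDefinition{
		// Name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {Type: "object", Properties: map[string]apiextv1.JSONSchemaProps{
								"bars": {Type: "integer"},
							}},
						},
					},
				},
			}},
		},
	}
	created, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created CRD:", created.Name)
}

After the CRD reports Established, its schema appears under the apiserver's /openapi/v2 endpoint, which is what these specs assert for CRDs in the same and in different groups.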
• [SLOW TEST:27.879 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":28,"skipped":355,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:55.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:55.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7552" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":29,"skipped":373,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:28.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:56.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8287" for this suite. 
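------------------------------
"Capture the life of a configMap" in the quota spec above means Status.Used for the configmaps resource rises when a ConfigMap is created and falls again when it is deleted. A reduced sketch of observing that (client-go v0.21.x assumed; names are illustrative, and note Used also counts any pre-existing ConfigMaps in the namespace):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-for-configmaps"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourceConfigMaps: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas("default").Create(context.TODO(), rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "quota-cm"}, Data: map[string]string{"k": "v"}}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		got, err := cs.CoreV1().ResourceQuotas("default").Get(context.TODO(), "quota-for-configmaps", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		used := got.Status.Used[corev1.ResourceConfigMaps]
		return used.Value() >= 1, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("quota captured the ConfigMap; deleting it releases the usage again")
}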
• [SLOW TEST:28.061 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":23,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:25.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:51:57.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4774" for this suite. 
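------------------------------
The container-runtime spec above asserts RestartCount, Phase, the Ready condition, and the terminated State for containers that exit under each restart policy. A reduced sketch of the core mechanic, one container exiting with a known code and the status being polled (client-go v0.21.x assumed; busybox and the names are illustrative, and the real test cycles through Always, OnFailure, and Never):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-demo"},
		Spec: corev1.PodSpec{
			// With Never, a non-zero exit drives the pod to phase Failed.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		got, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-demo", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if len(got.Status.ContainerStatuses) == 0 {
			return false, nil
		}
		t := got.Status.ContainerStatuses[0].State.Terminated
		if t == nil {
			return false, nil
		}
		fmt.Printf("phase=%s exitCode=%d restarts=%d\n",
			got.Status.Phase, t.ExitCode, got.Status.ContainerStatuses[0].RestartCount)
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}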
• [SLOW TEST:32.225 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":388,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:57.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Oct 23 00:51:57.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 create -f -' Oct 23 00:51:58.070: INFO: stderr: "" Oct 23 00:51:58.070: INFO: stdout: "pod/pause created\n" Oct 23 00:51:58.070: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 23 00:51:58.071: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8902" to be "running and ready" Oct 23 00:51:58.073: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.677281ms Oct 23 00:52:00.078: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007179693s Oct 23 00:52:02.084: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01363858s Oct 23 00:52:02.084: INFO: Pod "pause" satisfied condition "running and ready" Oct 23 00:52:02.084: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Oct 23 00:52:02.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 label pods pause testing-label=testing-label-value' Oct 23 00:52:02.264: INFO: stderr: "" Oct 23 00:52:02.264: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 23 00:52:02.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 get pod pause -L testing-label' Oct 23 00:52:02.431: INFO: stderr: "" Oct 23 00:52:02.432: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 23 00:52:02.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 label pods pause testing-label-' Oct 23 00:52:02.587: INFO: stderr: "" Oct 23 00:52:02.587: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 23 00:52:02.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 get pod pause -L testing-label' Oct 23 00:52:02.733: INFO: stderr: "" Oct 23 00:52:02.733: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Oct 23 00:52:02.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 delete --grace-period=0 --force -f -' Oct 23 00:52:02.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 23 00:52:02.870: INFO: stdout: "pod \"pause\" force deleted\n" Oct 23 00:52:02.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 get rc,svc -l name=pause --no-headers' Oct 23 00:52:03.069: INFO: stderr: "No resources found in kubectl-8902 namespace.\n" Oct 23 00:52:03.069: INFO: stdout: "" Oct 23 00:52:03.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8902 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 23 00:52:03.244: INFO: stderr: "" Oct 23 00:52:03.244: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:03.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8902" for this suite. 
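------------------------------
The 'kubectl label pods pause testing-label=...' and 'kubectl label pods pause testing-label-' steps above correspond to merge-patching the pod's metadata; a null value for a label key deletes it. A minimal client-go sketch (v0.21.x assumed; the pod name and label come from the log, the namespace is illustrative since kubectl-8902 was destroyed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods("default").Patch(context.TODO(), "pause",
		types.MergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Equivalent of: kubectl label pods pause testing-label-
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods("default").Patch(context.TODO(), "pause",
		types.MergePatchType, remove, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label added and removed")
}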
• [SLOW TEST:5.593 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":25,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:56.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:51:56.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744" in namespace "projected-7856" to be "Succeeded or Failed" Oct 23 00:51:56.652: INFO: Pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470096ms Oct 23 00:51:58.656: INFO: Pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006385898s Oct 23 00:52:00.659: INFO: Pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009569827s Oct 23 00:52:02.663: INFO: Pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013291889s Oct 23 00:52:04.667: INFO: Pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01741084s STEP: Saw pod success Oct 23 00:52:04.667: INFO: Pod "downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744" satisfied condition "Succeeded or Failed" Oct 23 00:52:04.670: INFO: Trying to get logs from node node2 pod downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744 container client-container: STEP: delete the pod Oct 23 00:52:04.684: INFO: Waiting for pod downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744 to disappear Oct 23 00:52:04.687: INFO: Pod downwardapi-volume-3c11c58f-2d23-4a18-84de-2512c6ed8744 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:04.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7856" for this suite. 
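------------------------------
The projected downwardAPI spec above mounts the container's own memory request as a file. A minimal sketch of that volume shape (client-go v0.21.x assumed; busybox, the request size, and paths are illustrative, while client-container and memory_request mirror the test's naming):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request",
									// The file contains requests.memory of this container.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; its log prints the memory request in bytes")
}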
• [SLOW TEST:8.080 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":415,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:03.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Oct 23 00:52:03.322: INFO: Waiting up to 5m0s for pod "test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183" in namespace "svcaccounts-8109" to be "Succeeded or Failed" Oct 23 00:52:03.325: INFO: Pod "test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183": Phase="Pending", Reason="", readiness=false. Elapsed: 3.692871ms Oct 23 00:52:05.328: INFO: Pod "test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006742626s Oct 23 00:52:07.334: INFO: Pod "test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01193108s STEP: Saw pod success Oct 23 00:52:07.334: INFO: Pod "test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183" satisfied condition "Succeeded or Failed" Oct 23 00:52:07.336: INFO: Trying to get logs from node node2 pod test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183 container agnhost-container: STEP: delete the pod Oct 23 00:52:07.347: INFO: Waiting for pod test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183 to disappear Oct 23 00:52:07.350: INFO: Pod test-pod-6d67f0a2-a910-40dd-a7d3-e6ef2302c183 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:07.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8109" for this suite. 
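------------------------------
The service-account spec above mounts a projected, expiring token rather than the legacy secret-based one. A minimal sketch of that projection (client-go v0.21.x assumed; the path, expiry, and busybox image are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	expiry := int64(3600)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sa-token-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "wc -c /var/run/secrets/tokens/sa-token"},
				VolumeMounts: []corev1.VolumeMount{{Name: "sa-token", MountPath: "/var/run/secrets/tokens"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sa-token",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							// The kubelet requests and rotates this token on
							// behalf of the pod's service account.
							ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
								Path:              "sa-token",
								ExpirationSeconds: &expiry,
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; the token file is rotated before expiry")
}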
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":26,"skipped":407,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:04.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-d3f95510-fab7-4947-b240-2ce2ed6be82e STEP: Creating a pod to test consume configMaps Oct 23 00:52:04.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf" in namespace "configmap-5600" to be "Succeeded or Failed" Oct 23 00:52:04.759: INFO: Pod "pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.146626ms Oct 23 00:52:06.762: INFO: Pod "pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006646613s Oct 23 00:52:08.765: INFO: Pod "pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009105141s STEP: Saw pod success Oct 23 00:52:08.765: INFO: Pod "pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf" satisfied condition "Succeeded or Failed" Oct 23 00:52:08.766: INFO: Trying to get logs from node node2 pod pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf container agnhost-container: STEP: delete the pod Oct 23 00:52:08.779: INFO: Waiting for pod pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf to disappear Oct 23 00:52:08.780: INFO: Pod pod-configmaps-ac8c6cc1-d979-4a75-9412-c6558c58abdf no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:08.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5600" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":428,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:51.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-5957 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5957 to expose endpoints map[] Oct 23 00:51:51.832: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Oct 23 00:51:52.838: INFO: successfully validated that service endpoint-test2 in namespace services-5957 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5957 Oct 23 00:51:52.853: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:51:54.856: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:51:56.856: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:51:58.857: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:52:00.856: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:52:02.856: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5957 to expose endpoints map[pod1:[80]] Oct 23 00:52:02.866: INFO: successfully validated that service endpoint-test2 in namespace services-5957 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-5957 Oct 23 00:52:02.878: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:52:04.881: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:52:06.881: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Oct 23 00:52:08.880: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5957 to expose endpoints map[pod1:[80] pod2:[80]] Oct 23 00:52:08.891: INFO: successfully validated that service endpoint-test2 in namespace services-5957 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-5957 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5957 to expose endpoints map[pod2:[80]] Oct 23 00:52:08.905: INFO: successfully validated that service endpoint-test2 in namespace services-5957 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-5957 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5957 to expose endpoints map[] Oct 23 00:52:08.920: INFO: successfully validated that service endpoint-test2 in 
namespace services-5957 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:08.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5957" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:17.140 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":28,"skipped":374,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:08.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Oct 23 00:52:09.000: INFO: created test-pod-1 Oct 23 00:52:09.009: INFO: created test-pod-2 Oct 23 00:52:09.017: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:09.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9655" for this suite. 
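------------------------------
Deleting a collection of pods, as the spec above does, is a single DeleteCollection call scoped by a label selector rather than three individual deletes. A minimal sketch (client-go v0.21.x assumed; the pod names mirror the log, the label and image are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"test-pod-1", "test-pod-2", "test-pod-3"} {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name, Labels: map[string]string{"type": "Testing"}},
			Spec: corev1.PodSpec{Containers: []corev1.Container{{
				Name: "main", Image: "busybox", Command: []string{"sleep", "3600"},
			}}},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
	// One call removes everything the selector matches.
	err = cs.CoreV1().Pods("default").DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "type=Testing"})
	if err != nil {
		panic(err)
	}
	fmt.Println("collection deleted")
}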
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":29,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:08.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Oct 23 00:52:08.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f" in namespace "projected-9161" to be "Succeeded or Failed" Oct 23 00:52:08.831: INFO: Pod "downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258438ms Oct 23 00:52:10.834: INFO: Pod "downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007136057s Oct 23 00:52:12.839: INFO: Pod "downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012189488s STEP: Saw pod success Oct 23 00:52:12.839: INFO: Pod "downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f" satisfied condition "Succeeded or Failed" Oct 23 00:52:12.842: INFO: Trying to get logs from node node1 pod downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f container client-container: STEP: delete the pod Oct 23 00:52:12.855: INFO: Waiting for pod downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f to disappear Oct 23 00:52:12.857: INFO: Pod downwardapi-volume-f748c480-ad9c-43c0-86a1-a56b6edc721f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:12.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9161" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:12.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 23 00:52:12.965: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:12.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5372" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":27,"skipped":469,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:13.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-afdc5c75-bd4e-48de-ba73-4d3874fce676 STEP: Creating a pod to test consume configMaps Oct 23 00:52:13.053: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c" in namespace "configmap-5512" to be "Succeeded or Failed" Oct 23 00:52:13.056: INFO: Pod "pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867308ms Oct 23 00:52:15.061: INFO: Pod "pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007627007s Oct 23 00:52:17.065: INFO: Pod "pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012088394s STEP: Saw pod success Oct 23 00:52:17.065: INFO: Pod "pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c" satisfied condition "Succeeded or Failed" Oct 23 00:52:17.068: INFO: Trying to get logs from node node1 pod pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c container agnhost-container: STEP: delete the pod Oct 23 00:52:17.081: INFO: Waiting for pod pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c to disappear Oct 23 00:52:17.083: INFO: Pod pod-configmaps-4b89d5d1-c557-4ee9-b678-93541bedae2c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:17.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5512" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:17.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-4612/configmap-test-7fc3ee67-013b-4bd2-b400-945d84161759 STEP: Creating a pod to test consume configMaps Oct 23 00:52:17.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa" in namespace "configmap-4612" to be "Succeeded or Failed" Oct 23 00:52:17.185: INFO: Pod "pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497979ms Oct 23 00:52:19.188: INFO: Pod "pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005960066s Oct 23 00:52:21.191: INFO: Pod "pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009286862s Oct 23 00:52:23.196: INFO: Pod "pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013612319s STEP: Saw pod success Oct 23 00:52:23.196: INFO: Pod "pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa" satisfied condition "Succeeded or Failed" Oct 23 00:52:23.198: INFO: Trying to get logs from node node1 pod pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa container env-test: STEP: delete the pod Oct 23 00:52:23.211: INFO: Waiting for pod pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa to disappear Oct 23 00:52:23.213: INFO: Pod pod-configmaps-21979895-bbcf-444e-b5fc-4bb8d6895dfa no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:23.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4612" for this suite. 
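------------------------------
"Consumable via the environment" in the configMap spec above means a key is injected as an env var through configMapKeyRef, rather than mounted as a file. A minimal sketch (client-go v0.21.x assumed; names and the busybox image are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; its log prints value-1")
}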
• [SLOW TEST:6.078 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":512,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:07.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5596.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5596.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5596.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5596.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:52:13.421: INFO: DNS probes using dns-test-61178494-5864-4b77-b02d-450f8eabeaa6 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5596.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5596.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5596.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5596.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:52:17.460: INFO: DNS probes using dns-test-c046926d-ac6e-4c5e-b340-f448c3b54785 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5596.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5596.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5596.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5596.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 23 00:52:23.510: INFO: DNS probes using dns-test-27c09367-7843-4d3e-b5b5-158d285972e7 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:23.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5596" for this suite. • [SLOW TEST:16.167 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":27,"skipped":410,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:55.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3480, will wait for the garbage collector to delete the pods Oct 23 00:52:07.700: INFO: Deleting Job.batch foo took: 3.964494ms Oct 23 00:52:07.800: INFO: Terminating Job.batch foo pods took: 100.359246ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:40.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3480" for this suite. 
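The Job test above creates a job, waits for the active pods to match the job's parallelism, deletes the job, and then waits for the garbage collector to remove its pods. A rough manual equivalent (job name, image, and command are hypothetical):

```sh
# Sketch only: create a job, then delete it and let the GC reap its pods.
kubectl create job demo-job --image=busybox:1.29 -- sh -c 'sleep 3600'

# Background cascading deletion deletes the Job object first and leaves pod
# cleanup to the garbage collector, which is what the test waits on above.
kubectl delete job demo-job --cascade=background

# The Job controller labels its pods with job-name=<job>; this listing should
# drain to empty shortly after the delete. --cascade=orphan would instead
# leave the pods running, the behavior a garbage-collector test checks later
# in this run.
kubectl get pods -l job-name=demo-job
```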
• [SLOW TEST:45.205 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":30,"skipped":375,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:23.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-crfk STEP: Creating a pod to test atomic-volume-subpath Oct 23 00:52:23.273: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-crfk" in namespace "subpath-6605" to be "Succeeded or Failed" Oct 23 00:52:23.276: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.833163ms Oct 23 00:52:25.281: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007606178s Oct 23 00:52:27.287: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 4.013719623s Oct 23 00:52:29.291: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 6.017546508s Oct 23 00:52:31.296: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 8.023019348s Oct 23 00:52:33.301: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 10.028017925s Oct 23 00:52:35.305: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 12.031761717s Oct 23 00:52:37.310: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 14.036534917s Oct 23 00:52:39.314: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 16.041052432s Oct 23 00:52:41.318: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 18.044192621s Oct 23 00:52:43.323: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 20.050069747s Oct 23 00:52:45.327: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Running", Reason="", readiness=true. Elapsed: 22.053596768s Oct 23 00:52:47.333: INFO: Pod "pod-subpath-test-secret-crfk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.059160952s STEP: Saw pod success Oct 23 00:52:47.333: INFO: Pod "pod-subpath-test-secret-crfk" satisfied condition "Succeeded or Failed" Oct 23 00:52:47.335: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-crfk container test-container-subpath-secret-crfk: STEP: delete the pod Oct 23 00:52:47.349: INFO: Waiting for pod pod-subpath-test-secret-crfk to disappear Oct 23 00:52:47.352: INFO: Pod pod-subpath-test-secret-crfk no longer exists STEP: Deleting pod pod-subpath-test-secret-crfk Oct 23 00:52:47.352: INFO: Deleting pod "pod-subpath-test-secret-crfk" in namespace "subpath-6605" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:47.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6605" for this suite. • [SLOW TEST:24.130 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:23.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-l78g STEP: Creating a pod to test atomic-volume-subpath Oct 23 00:52:23.601: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l78g" in namespace "subpath-8440" to be "Succeeded or Failed" Oct 23 00:52:23.604: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312082ms Oct 23 00:52:25.606: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005180745s Oct 23 00:52:27.614: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 4.012568333s Oct 23 00:52:29.617: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 6.015794611s Oct 23 00:52:31.622: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 8.021046168s Oct 23 00:52:33.626: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 10.025041359s Oct 23 00:52:35.632: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 12.031058229s Oct 23 00:52:37.636: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.034943427s Oct 23 00:52:39.641: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 16.039954159s Oct 23 00:52:41.645: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 18.043482234s Oct 23 00:52:43.648: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 20.04670847s Oct 23 00:52:45.651: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Running", Reason="", readiness=true. Elapsed: 22.04986154s Oct 23 00:52:47.658: INFO: Pod "pod-subpath-test-downwardapi-l78g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056659061s STEP: Saw pod success Oct 23 00:52:47.658: INFO: Pod "pod-subpath-test-downwardapi-l78g" satisfied condition "Succeeded or Failed" Oct 23 00:52:47.660: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-l78g container test-container-subpath-downwardapi-l78g: STEP: delete the pod Oct 23 00:52:47.730: INFO: Waiting for pod pod-subpath-test-downwardapi-l78g to disappear Oct 23 00:52:47.733: INFO: Pod pod-subpath-test-downwardapi-l78g no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-l78g Oct 23 00:52:47.733: INFO: Deleting pod "pod-subpath-test-downwardapi-l78g" in namespace "subpath-8440" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:47.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8440" for this suite. • [SLOW TEST:24.185 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":517,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:47.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 23 00:52:47.401: INFO: Waiting up to 5m0s for pod "pod-c3afb53e-a450-4995-b46a-c3821b998cca" in namespace "emptydir-4151" to be "Succeeded or Failed" Oct 23 00:52:47.402: INFO: Pod "pod-c3afb53e-a450-4995-b46a-c3821b998cca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.782466ms Oct 23 00:52:49.407: INFO: Pod "pod-c3afb53e-a450-4995-b46a-c3821b998cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005933372s Oct 23 00:52:51.410: INFO: Pod "pod-c3afb53e-a450-4995-b46a-c3821b998cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009048055s STEP: Saw pod success Oct 23 00:52:51.410: INFO: Pod "pod-c3afb53e-a450-4995-b46a-c3821b998cca" satisfied condition "Succeeded or Failed" Oct 23 00:52:51.413: INFO: Trying to get logs from node node1 pod pod-c3afb53e-a450-4995-b46a-c3821b998cca container test-container: STEP: delete the pod Oct 23 00:52:51.427: INFO: Waiting for pod pod-c3afb53e-a450-4995-b46a-c3821b998cca to disappear Oct 23 00:52:51.428: INFO: Pod pod-c3afb53e-a450-4995-b46a-c3821b998cca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:51.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4151" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:47.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Oct 23 00:52:53.825: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4571 PodName:pod-sharedvolume-5789c6cf-3c91-4bac-9f46-7d509ce6b371 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 00:52:53.825: INFO: >>> kubeConfig: /root/.kube/config Oct 23 00:52:53.914: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:53.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4571" for this suite. 
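The shared-volume test that just finished runs two containers in one pod and reads, from one container, a file written by the other through a common emptyDir mount. A hand-runnable sketch under hypothetical names:

```sh
# Sketch only: two containers sharing one emptyDir volume.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /pod-data/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
EOF

# The manual analogue of the ExecWithOptions call in the log: read the file
# from the second container to confirm both containers see the same volume.
kubectl exec shared-volume-demo -c reader -- cat /pod-data/shareddata.txt
```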
• [SLOW TEST:6.140 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":29,"skipped":448,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:51.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-10473182-b47f-4c1e-92c0-eb0bd658c807 STEP: Creating a pod to test consume secrets Oct 23 00:52:51.548: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff" in namespace "projected-8852" to be "Succeeded or Failed" Oct 23 00:52:51.552: INFO: Pod "pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310747ms Oct 23 00:52:53.555: INFO: Pod "pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112737s Oct 23 00:52:55.559: INFO: Pod "pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010610149s STEP: Saw pod success Oct 23 00:52:55.559: INFO: Pod "pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff" satisfied condition "Succeeded or Failed" Oct 23 00:52:55.561: INFO: Trying to get logs from node node2 pod pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff container projected-secret-volume-test: STEP: delete the pod Oct 23 00:52:55.574: INFO: Waiting for pod pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff to disappear Oct 23 00:52:55.576: INFO: Pod pod-projected-secrets-2ef87fca-47e6-4dac-93ae-983d8faae3ff no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:55.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8852" for this suite. 
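The projected-secret test above mounts a Secret through a projected volume and checks the mounted file's content from inside the container. A sketch with hypothetical names:

```sh
# Sketch only: a secret consumed via a projected volume source.
kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /projected-volume/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
```

Projected volumes can combine secret, configMap, downwardAPI, and serviceAccountToken sources under a single mount point, which is why the suite exercises them separately from plain secret volumes.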
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Oct 23 00:52:55.624: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:53.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-baa0c34a-d662-4d51-9f04-de6d619f8f9b STEP: Creating a pod to test consume secrets Oct 23 00:52:53.969: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9" in namespace "projected-9635" to be "Succeeded or Failed" Oct 23 00:52:53.972: INFO: Pod "pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.401166ms Oct 23 00:52:55.976: INFO: Pod "pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007370008s Oct 23 00:52:57.981: INFO: Pod "pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011960187s STEP: Saw pod success Oct 23 00:52:57.981: INFO: Pod "pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9" satisfied condition "Succeeded or Failed" Oct 23 00:52:57.984: INFO: Trying to get logs from node node1 pod pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9 container secret-volume-test: STEP: delete the pod Oct 23 00:52:58.002: INFO: Waiting for pod pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9 to disappear Oct 23 00:52:58.004: INFO: Pod pod-projected-secrets-f75f31cb-9767-4392-9c2a-adef31fd87a9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:52:58.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9635" for this suite. 
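The multiple-volumes variant that just passed mounts the same secret at two paths and expects identical content at both mounts. A compact sketch (hypothetical names; demo-secret as created in the previous sketch):

```sh
# Sketch only: one secret, two volumes, two mount points.
kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF
```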
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":450,"failed":0} Oct 23 00:52:58.014: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:52:09.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-3345e768-8a4c-496a-bfa0-8f3d404037a8 in namespace container-probe-9172 Oct 23 00:52:13.161: INFO: Started pod busybox-3345e768-8a4c-496a-bfa0-8f3d404037a8 in namespace container-probe-9172 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 00:52:13.163: INFO: Initial restart count of pod busybox-3345e768-8a4c-496a-bfa0-8f3d404037a8 is 0 Oct 23 00:53:01.275: INFO: Restart count of pod container-probe-9172/busybox-3345e768-8a4c-496a-bfa0-8f3d404037a8 is now 1 (48.111853291s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:53:01.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9172" for this suite. • [SLOW TEST:52.168 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":441,"failed":0} Oct 23 00:53:01.292: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:45.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1023 00:52:25.347455 39 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
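Before the garbage-collector output continues, a note on the exec liveness probe test above: its pod creates /tmp/health, removes it after a delay, and the kubelet restarts the container once `cat /tmp/health` has failed enough consecutive probes, which is the restart-count bump from 0 to 1 the log records at 00:53:01. A minimal sketch with hypothetical names and timings:

```sh
# Sketch only: an exec liveness probe that passes, then starts failing.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF

# Once /tmp/health is gone, three failed probes (about 15s here) trigger a
# container restart; watch the RESTARTS column tick up.
kubectl get pod liveness-exec-demo -w
```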
Oct 23 00:53:27.364: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 23 00:53:27.364: INFO: Deleting pod "simpletest.rc-2f8q5" in namespace "gc-235" Oct 23 00:53:27.371: INFO: Deleting pod "simpletest.rc-48bz2" in namespace "gc-235" Oct 23 00:53:27.377: INFO: Deleting pod "simpletest.rc-6dvgw" in namespace "gc-235" Oct 23 00:53:27.383: INFO: Deleting pod "simpletest.rc-6rrk7" in namespace "gc-235" Oct 23 00:53:27.390: INFO: Deleting pod "simpletest.rc-d6klc" in namespace "gc-235" Oct 23 00:53:27.395: INFO: Deleting pod "simpletest.rc-gzvcx" in namespace "gc-235" Oct 23 00:53:27.400: INFO: Deleting pod "simpletest.rc-jqp79" in namespace "gc-235" Oct 23 00:53:27.405: INFO: Deleting pod "simpletest.rc-l2ks8" in namespace "gc-235" Oct 23 00:53:27.410: INFO: Deleting pod "simpletest.rc-s9tpx" in namespace "gc-235" Oct 23 00:53:27.418: INFO: Deleting pod "simpletest.rc-vdjjl" in namespace "gc-235" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 00:53:27.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-235" for this suite. • [SLOW TEST:102.142 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":30,"skipped":683,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} Oct 23 00:53:27.431: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 00:51:08.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-2848 STEP: creating replication controller nodeport-test in namespace services-2848 I1023 00:51:08.325194 25 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2848, replica count: 2 I1023 00:51:11.376209 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:51:14.376709 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 00:51:17.376923 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 
00:51:17.376: INFO: Creating new exec pod Oct 23 00:51:26.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Oct 23 00:51:26.680: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Oct 23 00:51:26.680: INFO: stdout: "nodeport-test-gtfw4" Oct 23 00:51:26.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.58.89 80' Oct 23 00:51:26.931: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.58.89 80\nConnection to 10.233.58.89 80 port [tcp/http] succeeded!\n" Oct 23 00:51:26.931: INFO: stdout: "" Oct 23 00:51:27.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.58.89 80' Oct 23 00:51:28.157: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.58.89 80\nConnection to 10.233.58.89 80 port [tcp/http] succeeded!\n" Oct 23 00:51:28.157: INFO: stdout: "nodeport-test-rhmfz" Oct 23 00:51:28.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:51:28.493: INFO: rc: 1 Oct 23 00:51:28.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:51:29.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:51:29.742: INFO: rc: 1 Oct 23 00:51:29.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:51:30.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:51:30.720: INFO: rc: 1 Oct 23 00:51:30.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:51:31.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' [the same check repeats roughly once per second from 00:51:31.495 through 00:52:21.809; every attempt returns rc: 1 with stderr "nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused", "command terminated with exit code 1", "error: exit status 1", and the suite logs "Retrying..." each time; the excerpt ends with the check still retrying]
Oct 23 00:52:22.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:22.724: INFO: rc: 1 Oct 23 00:52:22.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:23.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:23.779: INFO: rc: 1 Oct 23 00:52:23.779: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:24.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:24.754: INFO: rc: 1 Oct 23 00:52:24.754: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:25.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:25.738: INFO: rc: 1 Oct 23 00:52:25.738: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:26.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:26.747: INFO: rc: 1 Oct 23 00:52:26.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:27.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:27.749: INFO: rc: 1 Oct 23 00:52:27.749: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:28.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:28.740: INFO: rc: 1 Oct 23 00:52:28.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:29.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:29.736: INFO: rc: 1 Oct 23 00:52:29.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:30.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:30.740: INFO: rc: 1 Oct 23 00:52:30.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:31.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:31.724: INFO: rc: 1 Oct 23 00:52:31.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:32.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:32.749: INFO: rc: 1 Oct 23 00:52:32.749: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:33.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:33.724: INFO: rc: 1 Oct 23 00:52:33.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:34.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:34.732: INFO: rc: 1 Oct 23 00:52:34.732: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:35.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:35.756: INFO: rc: 1 Oct 23 00:52:35.756: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:36.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:36.733: INFO: rc: 1 Oct 23 00:52:36.733: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:37.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:37.723: INFO: rc: 1 Oct 23 00:52:37.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:38.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:38.737: INFO: rc: 1 Oct 23 00:52:38.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:39.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:39.747: INFO: rc: 1 Oct 23 00:52:39.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:40.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:40.784: INFO: rc: 1 Oct 23 00:52:40.784: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:41.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:41.743: INFO: rc: 1 Oct 23 00:52:41.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:42.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:42.772: INFO: rc: 1 Oct 23 00:52:42.772: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:43.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:43.742: INFO: rc: 1 Oct 23 00:52:43.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:44.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:44.752: INFO: rc: 1 Oct 23 00:52:44.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30369 + echo hostName nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:45.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:45.723: INFO: rc: 1 Oct 23 00:52:45.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:46.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:46.736: INFO: rc: 1 Oct 23 00:52:46.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:47.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:47.811: INFO: rc: 1 Oct 23 00:52:47.811: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:48.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:48.995: INFO: rc: 1 Oct 23 00:52:48.995: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:49.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:50.303: INFO: rc: 1 Oct 23 00:52:50.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:50.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:50.733: INFO: rc: 1 Oct 23 00:52:50.733: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:51.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:51.845: INFO: rc: 1 Oct 23 00:52:51.845: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:52.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:52.739: INFO: rc: 1 Oct 23 00:52:52.739: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:53.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:53.752: INFO: rc: 1 Oct 23 00:52:53.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:54.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:54.746: INFO: rc: 1 Oct 23 00:52:54.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:55.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:55.768: INFO: rc: 1 Oct 23 00:52:55.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:56.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:56.953: INFO: rc: 1 Oct 23 00:52:56.953: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:57.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:57.742: INFO: rc: 1 Oct 23 00:52:57.742: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:58.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:58.752: INFO: rc: 1 Oct 23 00:52:58.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:59.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:52:59.732: INFO: rc: 1 Oct 23 00:52:59.732: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:00.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:00.743: INFO: rc: 1 Oct 23 00:53:00.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:01.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:01.753: INFO: rc: 1 Oct 23 00:53:01.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:02.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:02.737: INFO: rc: 1 Oct 23 00:53:02.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:03.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:03.740: INFO: rc: 1 Oct 23 00:53:03.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:04.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:04.752: INFO: rc: 1 Oct 23 00:53:04.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:05.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:05.759: INFO: rc: 1 Oct 23 00:53:05.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:06.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:06.728: INFO: rc: 1 Oct 23 00:53:06.728: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:07.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:07.867: INFO: rc: 1 Oct 23 00:53:07.867: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:08.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:09.104: INFO: rc: 1 Oct 23 00:53:09.104: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:09.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:10.792: INFO: rc: 1 Oct 23 00:53:10.792: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:11.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:11.757: INFO: rc: 1 Oct 23 00:53:11.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:12.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:14.511: INFO: rc: 1 Oct 23 00:53:14.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:15.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:15.744: INFO: rc: 1 Oct 23 00:53:15.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:16.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:16.736: INFO: rc: 1 Oct 23 00:53:16.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:17.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:17.735: INFO: rc: 1 Oct 23 00:53:17.736: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:18.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:18.748: INFO: rc: 1 Oct 23 00:53:18.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:19.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:20.014: INFO: rc: 1 Oct 23 00:53:20.014: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:20.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:20.735: INFO: rc: 1 Oct 23 00:53:20.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:21.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:21.755: INFO: rc: 1 Oct 23 00:53:21.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:22.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:22.733: INFO: rc: 1 Oct 23 00:53:22.733: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:23.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:23.761: INFO: rc: 1 Oct 23 00:53:23.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:24.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:24.733: INFO: rc: 1 Oct 23 00:53:24.733: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:25.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:25.746: INFO: rc: 1 Oct 23 00:53:25.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:26.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:26.740: INFO: rc: 1 Oct 23 00:53:26.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:27.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:27.816: INFO: rc: 1 Oct 23 00:53:27.816: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:28.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:28.848: INFO: rc: 1 Oct 23 00:53:28.848: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:28.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369' Oct 23 00:53:29.146: INFO: rc: 1 Oct 23 00:53:29.146: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2848 exec execpods6v55 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30369: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30369 nc: connect to 10.10.190.207 port 30369 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
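Every attempt above fails the same way: the probe is nothing more than a TCP connect to the NodePort, retried about once per second until the suite's 2m0s deadline expires (the FAIL immediately below). As a rough standalone illustration only — this is not the e2e framework's implementation; the endpoint, per-dial timeout, and retry cadence are simply lifted from the log — an equivalent probe in Go:

package main

import (
	"fmt"
	"net"
	"time"
)

// Standalone sketch of the reachability probe seen in the log above; not the
// e2e framework's code. Endpoint = node IP 10.10.190.207, NodePort 30369.
func main() {
	endpoint := "10.10.190.207:30369"
	deadline := time.Now().Add(2 * time.Minute) // the test's 2m0s reachability timeout
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		// Equivalent of `nc -w 2`: give each connect attempt two seconds.
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: endpoint reachable\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying...\n", attempt, err)
		time.Sleep(time.Second) // ~1s cadence, as in the log
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
}

Note that the failure mode is "Connection refused" rather than a timeout: the node at 10.10.190.207 answered, but nothing accepted connections on port 30369 there, which points at the NodePort programming (kube-proxy) on that node rather than at the backend pods — whose events below all look healthy.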
Oct 23 00:53:29.147: FAIL: Unexpected error:
    <*errors.errorString | 0xc00230a420>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30369 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30369 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001be7380)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001be7380)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001be7380, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2848".
STEP: Found 17 events.
Oct 23 00:53:29.165: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpods6v55: { } Scheduled: Successfully assigned services-2848/execpods6v55 to node1
Oct 23 00:53:29.165: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-gtfw4: { } Scheduled: Successfully assigned services-2848/nodeport-test-gtfw4 to node2
Oct 23 00:53:29.165: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-rhmfz: { } Scheduled: Successfully assigned services-2848/nodeport-test-rhmfz to node2
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:08 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-gtfw4
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:08 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-rhmfz
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:11 +0000 UTC - event for nodeport-test-gtfw4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:11 +0000 UTC - event for nodeport-test-gtfw4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 481.85567ms
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:11 +0000 UTC - event for nodeport-test-rhmfz: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:12 +0000 UTC - event for nodeport-test-gtfw4: {kubelet node2} Started: Started container nodeport-test
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:12 +0000 UTC - event for nodeport-test-gtfw4: {kubelet node2} Created: Created container nodeport-test
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:12 +0000 UTC - event for nodeport-test-rhmfz: {kubelet node2} Started: Started container nodeport-test
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:12 +0000 UTC - event for nodeport-test-rhmfz: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 289.574837ms
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:12 +0000 UTC - event for nodeport-test-rhmfz: {kubelet node2} Created: Created container nodeport-test
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:22 +0000 UTC - event for execpods6v55: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for execpods6v55: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.675870036s
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:24 +0000 UTC - event for execpods6v55: {kubelet node1} Started: Started container agnhost-container
Oct 23 00:53:29.165: INFO: At 2021-10-23 00:51:24 +0000 UTC - event for execpods6v55: {kubelet node1} Created: Created container agnhost-container
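The 17 events above are the framework's standard AfterEach diagnostics for the test namespace, and they are all routine scheduling, image pulls, and container starts — further evidence that the backends came up and the break was in the NodePort data path. For hand debugging, the same event view can be pulled with a small client-go program; this is a hypothetical helper (the kubeconfig path and namespace are taken from this log), not part of the suite:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Hypothetical debugging helper: list events in the namespace the failed
// spec ran in, mirroring the framework's "Collecting events" dump above.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	events, err := cs.CoreV1().Events("services-2848").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s %s: %s\n", e.LastTimestamp.UTC().Format("2006-01-02 15:04:05"),
			e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}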
Oct 23 00:53:29.169: INFO: POD                  NODE   PHASE    GRACE  CONDITIONS
Oct 23 00:53:29.169: INFO: execpods6v55         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:17 +0000 UTC }]
Oct 23 00:53:29.169: INFO: nodeport-test-gtfw4  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:08 +0000 UTC }]
Oct 23 00:53:29.169: INFO: nodeport-test-rhmfz  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 00:51:08 +0000 UTC }]
Oct 23 00:53:29.169: INFO:
Oct 23 00:53:29.174: INFO: Logging node info for node master1
Oct 23 00:53:29.177: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 75102 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:53:29.178: INFO: Logging kubelet events for node master1 Oct 23 00:53:29.181: INFO: Logging pods the kubelet 
thinks is on node master1
Oct 23 00:53:29.195: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 00:53:29.195: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 00:53:29.195: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 00:53:29.195: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Init container install-cni ready: true, restart count 1
Oct 23 00:53:29.195: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 00:53:29.195: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:53:29.195: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container coredns ready: true, restart count 2
Oct 23 00:53:29.195: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container docker-registry ready: true, restart count 0
Oct 23 00:53:29.195: INFO: Container nginx ready: true, restart count 0
Oct 23 00:53:29.195: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:53:29.195: INFO: Container node-exporter ready: true, restart count 0
Oct 23 00:53:29.195: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.195: INFO: Container kube-scheduler ready: true, restart count 0
W1023 00:53:29.210413 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
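The failure recorded at the top of this spec is the NodePort reachability check at test/e2e/network/service.go:1169: after the nodeport-test pods report Ready, the test repeatedly probes node1's InternalIP (10.10.190.207) on the allocated NodePort (30369) until a TCP connection succeeds or the 2m0s budget expires. A minimal standalone sketch of that kind of poll, using only the endpoint and timeout taken from the log (an illustration, not the framework's actual helper; the 5s dial timeout and 2s retry interval are assumed, since the log does not record them):

// probe_nodeport.go - sketch of a TCP reachability poll like the one that failed above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint copied from the failure message: node1 InternalIP + NodePort.
	endpoint := "10.10.190.207:30369"
	deadline := time.Now().Add(2 * time.Minute) // same 2m0s budget the test reports

	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second) // per-attempt dial timeout (assumed)
		if err == nil {
			conn.Close()
			fmt.Println("service reachable on", endpoint)
			return
		}
		time.Sleep(2 * time.Second) // retry interval (assumed)
	}
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", endpoint)
}

Since the events and pod conditions above show all three pods Running and Ready on time, a persistent connect failure from a probe like this points at the node-level datapath for the NodePort (kube-proxy rules on node1, flannel, or the host firewall) rather than at the backend pods.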
Oct 23 00:53:29.280: INFO: Latency metrics for node master1 Oct 23 00:53:29.280: INFO: Logging node info for node master2 Oct 23 00:53:29.283: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 75066 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:20 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:20 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:20 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:20 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:53:29.283: INFO: Logging kubelet events for node master2 Oct 23 00:53:29.285: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 00:53:29.298: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:53:29.298: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:53:29.298: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:53:29.298: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:53:29.298: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:53:29.298: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:53:29.298: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:53:29.298: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.298: INFO: Container autoscaler ready: true, restart count 1 Oct 23 00:53:29.298: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:29.298: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:29.298: INFO: Container node-exporter ready: true, restart count 0 W1023 00:53:29.314832 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
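The blocks before and after this point are the framework's AfterEach diagnostics: the namespace's events and pod states are dumped once, then for every node the Node object, the kubelet's pod listing, and latency metrics are logged in turn. The event dump can be reproduced against a live cluster with client-go; a minimal sketch, assuming the kubeconfig path used by this run and the services-2848 namespace from the log:

// dump_events.go - sketch of the event listing behind "Found 17 events" above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this run's log output.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// "services-2848" is the namespace the framework created for the failed test.
	events, err := client.CoreV1().Events("services-2848").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s: %s - %s\n",
			e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}

Note that the test namespace is deleted once the suite finishes tearing down, so a dump like this only works while the namespace still exists.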
Oct 23 00:53:29.378: INFO: Latency metrics for node master2 Oct 23 00:53:29.378: INFO: Logging node info for node master3 Oct 23 00:53:29.381: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 75099 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:53:29.381: INFO: Logging kubelet events for node master3 Oct 23 00:53:29.384: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 00:53:29.399: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:53:29.399: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:53:29.399: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container coredns ready: true, restart count 2 Oct 23 00:53:29.399: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:53:29.399: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:53:29.399: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 00:53:29.399: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:29.399: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:29.399: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:53:29.399: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:53:29.399: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 
00:53:29.399: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.399: INFO: Container kube-proxy ready: true, restart count 1 W1023 00:53:29.414868 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:53:29.483: INFO: Latency metrics for node master3 Oct 23 00:53:29.483: INFO: Logging node info for node node1 Oct 23 00:53:29.486: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 75098 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:17:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:24 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 
23 00:53:29.487: INFO: Logging kubelet events for node node1 Oct 23 00:53:29.490: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 00:53:29.530: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 00:53:29.530: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:53:29.530: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 00:53:29.530: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 00:53:29.530: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:53:29.530: INFO: affinity-nodeport-timeout-ggxrn started at 2021-10-23 00:51:16 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 23 00:53:29.530: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 00:53:29.530: INFO: affinity-nodeport-timeout-g9p94 started at 2021-10-23 00:51:16 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Oct 23 00:53:29.530: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 00:53:29.530: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 00:53:29.530: INFO: Container config-reloader ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container grafana ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container prometheus ready: true, restart count 1 Oct 23 00:53:29.530: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 00:53:29.530: INFO: Container collectd ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 00:53:29.530: INFO: forbid-27249171-rpsp4 started at 2021-10-23 00:51:00 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container c ready: true, restart count 0 Oct 23 00:53:29.530: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 00:53:29.530: INFO: execpods6v55 started at 2021-10-23 00:51:17 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 
00:53:29.530: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:53:29.530: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:53:29.530: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 00:53:29.530: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 00:53:29.530: INFO: Container discover ready: false, restart count 0 Oct 23 00:53:29.530: INFO: Container init ready: false, restart count 0 Oct 23 00:53:29.530: INFO: Container install ready: false, restart count 0 Oct 23 00:53:29.530: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:29.530: INFO: Container nodereport ready: true, restart count 0 Oct 23 00:53:29.530: INFO: Container reconcile ready: true, restart count 0 Oct 23 00:53:29.530: INFO: affinity-nodeport-timeout-bvm9x started at 2021-10-23 00:51:16 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:29.530: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 W1023 00:53:29.547498 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:53:29.790: INFO: Latency metrics for node node1 Oct 23 00:53:29.790: INFO: Logging node info for node node2 Oct 23 00:53:29.793: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 75096 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 
feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:23 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:23 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:23 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:23 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 00:53:29.794: INFO: Logging kubelet events for node node2
Oct 23 00:53:29.797: INFO: Logging pods the kubelet thinks is on node node2
Oct 23 00:53:29.812: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 00:53:29.812: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container discover ready: false, restart count 0
Oct 23 00:53:29.812: INFO: Container init ready: false, restart count 0
Oct 23 00:53:29.812: INFO: Container install ready: false, restart count 0
Oct 23 00:53:29.812: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 00:53:29.812: INFO: nodeport-test-gtfw4 started at 2021-10-23 00:51:08 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container nodeport-test ready: true, restart count 0
Oct 23 00:53:29.812: INFO: nodeport-test-rhmfz started at 2021-10-23 00:51:08 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container nodeport-test ready: true, restart count 0
Oct 23 00:53:29.812: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:53:29.812: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 00:53:29.812: INFO: ss2-0 started at 2021-10-23 00:52:40 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container webserver ready: false, restart count 0
Oct 23 00:53:29.812: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 00:53:29.812: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:29.812: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:53:29.812: INFO: Container node-exporter ready: true, restart count 0
Oct 23 00:53:29.813: INFO: ss2-2 started at 2021-10-23 00:53:14 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container webserver ready: true, restart count 0
Oct 23 00:53:29.813: INFO: ss2-1 started at 2021-10-23 00:53:18 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container webserver ready: true, restart count 0
Oct 23 00:53:29.813: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Init container install-cni ready: true, restart count 1
Oct 23 00:53:29.813: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 00:53:29.813: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container tas-extender ready: true, restart count 0
Oct 23 00:53:29.813: INFO: execpod-affinity8k6b4 started at 2021-10-23 00:51:28 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container agnhost-container ready: true, restart count 0
Oct 23 00:53:29.813: INFO: var-expansion-c43d7a4d-8cc6-41c5-9e4c-c48133982575 started at 2021-10-23 00:51:52 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container dapi-container ready: false, restart count 0
Oct 23 00:53:29.813: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container collectd ready: true, restart count 0
Oct 23 00:53:29.813: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 00:53:29.813: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 00:53:29.813: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 00:53:29.813: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:29.813: INFO: Container nodereport ready: true, restart count 1
Oct 23 00:53:29.813: INFO: Container reconcile ready: true, restart count 0
W1023 00:53:29.829069 25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
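The per-node dump above (node object, image list, and kubelet pod listing) is what the e2e framework gathers automatically after a failure. A rough manual equivalent, sketched with this run's own kubeconfig path (the --field-selector form assumes a reasonably recent kubectl):

    kubectl --kubeconfig=/root/.kube/config describe node node2
    kubectl --kubeconfig=/root/.kube/config get pods -A --field-selector spec.nodeName=node2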
Oct 23 00:53:30.099: INFO: Latency metrics for node node2
Oct 23 00:53:30.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2848" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [141.819 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 23 00:53:29.147: Unexpected error:
      <*errors.errorString | 0xc00230a420>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30369 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30369 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":36,"skipped":590,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Oct 23 00:53:30.120: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:51:07.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-1712
Oct 23 00:51:07.999: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:51:10.002: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:51:12.002: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:51:14.002: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 23 00:51:16.003: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 23 00:51:16.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 23 00:51:16.248: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Oct 23 00:51:16.248: INFO: stdout: "iptables"
Oct 23 00:51:16.248: INFO: proxyMode: iptables
Oct 23 00:51:16.255: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 23 00:51:16.257: INFO: Pod kube-proxy-mode-detector no longer exists
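The proxy-mode probe above is nothing framework-specific: it is a single HTTP GET against kube-proxy's localhost metrics port (10249) from a pod on the node. A minimal sketch of the same check, reusing the suite's own command, and valid only while the detector pod still exists (it is deleted immediately afterwards):

    kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec kube-proxy-mode-detector -- \
        /bin/sh -c 'curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'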
STEP: creating service affinity-nodeport-timeout in namespace services-1712
STEP: creating replication controller affinity-nodeport-timeout in namespace services-1712
I1023 00:51:16.269046 31 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1712, replica count: 3
I1023 00:51:19.319504 31 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 00:51:22.320501 31 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 00:51:25.321581 31 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1023 00:51:28.323552 31 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 23 00:51:28.334: INFO: Creating new exec pod
Oct 23 00:51:33.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Oct 23 00:51:33.709: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Oct 23 00:51:33.709: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 23 00:51:33.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.15.227 80'
Oct 23 00:51:33.996: INFO: stderr: "+ nc -v -t -w 2 10.233.15.227 80\n+ echo hostName\nConnection to 10.233.15.227 80 port [tcp/http] succeeded!\n"
Oct 23 00:51:33.996: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 23 00:51:33.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861'
Oct 23 00:51:34.331: INFO: rc: 1
Oct 23 00:51:34.331: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31861
+ echo hostName
nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
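Each retry below re-runs the same reachability one-liner until it succeeds or the suite's 2m0s service-reachability timeout expires. Note that the "400 Bad Request" bodies above still count as success: the probe only needs the TCP connect to complete ("Connection to ... succeeded!"), and the literal string "hostName" is not a valid HTTP request, so the backend rightly rejects it. To replay one attempt by hand with this run's exec pod, node IP, and NodePort:

    kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- \
        /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 31861'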
[... the identical probe and failure repeat roughly once per second; every attempt from 00:51:35 through 00:52:32 returns rc: 1 with "nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused", each followed by "Retrying..." ...]
Oct 23 00:52:33.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:33.572: INFO: rc: 1 Oct 23 00:52:33.572: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:34.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:34.566: INFO: rc: 1 Oct 23 00:52:34.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:35.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:35.586: INFO: rc: 1 Oct 23 00:52:35.586: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:36.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:36.873: INFO: rc: 1 Oct 23 00:52:36.873: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:37.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:37.593: INFO: rc: 1 Oct 23 00:52:37.593: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:38.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:38.691: INFO: rc: 1 Oct 23 00:52:38.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:39.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:39.578: INFO: rc: 1 Oct 23 00:52:39.578: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31861 nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:40.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:40.599: INFO: rc: 1 Oct 23 00:52:40.599: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:41.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:41.571: INFO: rc: 1 Oct 23 00:52:41.571: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:42.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:42.613: INFO: rc: 1 Oct 23 00:52:42.613: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:43.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:43.592: INFO: rc: 1 Oct 23 00:52:43.592: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:44.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:44.638: INFO: rc: 1 Oct 23 00:52:44.638: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:45.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:45.593: INFO: rc: 1 Oct 23 00:52:45.593: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:46.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:46.650: INFO: rc: 1 Oct 23 00:52:46.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:47.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:48.082: INFO: rc: 1 Oct 23 00:52:48.082: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:48.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:48.808: INFO: rc: 1 Oct 23 00:52:48.808: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:49.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:50.163: INFO: rc: 1 Oct 23 00:52:50.163: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:50.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:50.588: INFO: rc: 1 Oct 23 00:52:50.588: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:51.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:51.940: INFO: rc: 1 Oct 23 00:52:51.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:52.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:52.855: INFO: rc: 1 Oct 23 00:52:52.855: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:53.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:53.595: INFO: rc: 1 Oct 23 00:52:53.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:54.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:54.571: INFO: rc: 1 Oct 23 00:52:54.572: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:55.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:55.568: INFO: rc: 1 Oct 23 00:52:55.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:56.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:56.572: INFO: rc: 1 Oct 23 00:52:56.572: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31861 nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:52:57.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:57.579: INFO: rc: 1 Oct 23 00:52:57.579: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:58.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:58.588: INFO: rc: 1 Oct 23 00:52:58.588: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:52:59.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:52:59.619: INFO: rc: 1 Oct 23 00:52:59.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:00.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:00.575: INFO: rc: 1 Oct 23 00:53:00.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:01.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:01.634: INFO: rc: 1 Oct 23 00:53:01.634: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:02.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:02.581: INFO: rc: 1 Oct 23 00:53:02.581: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:03.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:03.690: INFO: rc: 1 Oct 23 00:53:03.690: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:04.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:04.608: INFO: rc: 1 Oct 23 00:53:04.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:05.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:05.570: INFO: rc: 1 Oct 23 00:53:05.570: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:06.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:06.609: INFO: rc: 1 Oct 23 00:53:06.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:07.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:07.608: INFO: rc: 1 Oct 23 00:53:07.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:08.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:08.565: INFO: rc: 1 Oct 23 00:53:08.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:09.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:09.598: INFO: rc: 1 Oct 23 00:53:09.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:10.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:10.575: INFO: rc: 1 Oct 23 00:53:10.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:11.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:11.587: INFO: rc: 1 Oct 23 00:53:11.587: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:12.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:12.709: INFO: rc: 1 Oct 23 00:53:12.710: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:13.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:13.576: INFO: rc: 1 Oct 23 00:53:13.576: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:14.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:14.567: INFO: rc: 1 Oct 23 00:53:14.567: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:15.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:15.839: INFO: rc: 1 Oct 23 00:53:15.839: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:16.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:16.580: INFO: rc: 1 Oct 23 00:53:16.580: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:17.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:17.574: INFO: rc: 1 Oct 23 00:53:17.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:18.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:18.590: INFO: rc: 1 Oct 23 00:53:18.590: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:19.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:19.992: INFO: rc: 1 Oct 23 00:53:19.992: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:20.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:21.086: INFO: rc: 1 Oct 23 00:53:21.086: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:21.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:21.597: INFO: rc: 1 Oct 23 00:53:21.597: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:22.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:22.696: INFO: rc: 1 Oct 23 00:53:22.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31861 nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:23.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:23.563: INFO: rc: 1 Oct 23 00:53:23.563: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31861 nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:24.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:24.573: INFO: rc: 1 Oct 23 00:53:24.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:25.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:25.563: INFO: rc: 1 Oct 23 00:53:25.563: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:26.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:26.575: INFO: rc: 1 Oct 23 00:53:26.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:27.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:27.713: INFO: rc: 1 Oct 23 00:53:27.713: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:28.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:28.815: INFO: rc: 1 Oct 23 00:53:28.815: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:29.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:29.741: INFO: rc: 1 Oct 23 00:53:29.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:30.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:30.871: INFO: rc: 1 Oct 23 00:53:30.871: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:31.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:31.592: INFO: rc: 1 Oct 23 00:53:31.592: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:32.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:32.563: INFO: rc: 1 Oct 23 00:53:32.563: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
Oct 23 00:53:33.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:33.585: INFO: rc: 1 Oct 23 00:53:33.585: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:34.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:34.626: INFO: rc: 1 Oct 23 00:53:34.626: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... Oct 23 00:53:34.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861' Oct 23 00:53:34.971: INFO: rc: 1 Oct 23 00:53:34.971: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1712 exec execpod-affinity8k6b4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31861: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31861 + echo hostName nc: connect to 10.10.190.207 port 31861 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
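The retry output above is the e2e framework's service-reachability probe: the same `kubectl exec ... nc` command is re-run about once per second until it succeeds or a 2m0s deadline expires, at which point the test fails (the FAIL and stack trace below attribute the loop to test/e2e/network/service.go). As a rough illustration only — not the framework's actual code — the loop behaves like the following Go sketch; the function name probeNodePort is hypothetical, the namespace/pod/endpoint values are copied from the log above, and it assumes kubectl on PATH plus the k8s.io/apimachinery module:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// probeNodePort re-runs the nc check from inside the exec pod until it
// succeeds or the 2m0s budget is spent, mimicking the retry loop above.
// (Hypothetical sketch; not the e2e framework's implementation.)
func probeNodePort(ns, pod, host string, port int) error {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", host, port)
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		out, err := exec.Command("kubectl", "--namespace", ns, "exec", pod,
			"--", "/bin/sh", "-x", "-c", shellCmd).CombinedOutput()
		if err != nil {
			// Matches the entries above: rc 1 / "Connection refused" is not
			// fatal; log it and let the poller try again in one second.
			fmt.Printf("Service reachability failing with error: %v\n%s\nRetrying...\n", err, out)
			return false, nil
		}
		return true, nil // stdout would carry the backend pod's hostname
	})
}

func main() {
	// Values copied from the log; wait.ErrWaitTimeout here corresponds to
	// the "service is not reachable within 2m0s timeout" failure below.
	if err := probeNodePort("services-1712", "execpod-affinity8k6b4", "10.10.190.207", 31861); err != nil {
		fmt.Println("FAIL:", err)
	}
}

The detail mirrored from the log is that each "Connection refused" is swallowed (return false, nil), so the probe keeps printing "Retrying..." until the poller gives up and surfaces the timeout error recorded below.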
Oct 23 00:53:34.971: FAIL: Unexpected error:
    <*errors.errorString | 0xc004dce310>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31861 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31861 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc001914000, 0x779f8f8, 0xc001082840, 0xc001502000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001701680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001701680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001701680, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
Oct 23 00:53:34.973: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1712, will wait for the garbage collector to delete the pods
Oct 23 00:53:35.040: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 3.53821ms
Oct 23 00:53:35.141: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.407268ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-1712".
STEP: Found 33 events.
Oct 23 00:53:43.958: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-bvm9x: { } Scheduled: Successfully assigned services-1712/affinity-nodeport-timeout-bvm9x to node1
Oct 23 00:53:43.958: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-g9p94: { } Scheduled: Successfully assigned services-1712/affinity-nodeport-timeout-g9p94 to node1
Oct 23 00:53:43.958: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-ggxrn: { } Scheduled: Successfully assigned services-1712/affinity-nodeport-timeout-ggxrn to node1
Oct 23 00:53:43.958: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity8k6b4: { } Scheduled: Successfully assigned services-1712/execpod-affinity8k6b4 to node2
Oct 23 00:53:43.958: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-1712/kube-proxy-mode-detector to node2
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:09 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 317.116342ms
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:09 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:10 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:11 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:16 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-bvm9x
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:16 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-ggxrn
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:16 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-g9p94
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:16 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:20 +0000 UTC - event for affinity-nodeport-timeout-bvm9x: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:20 +0000 UTC - event for affinity-nodeport-timeout-g9p94: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:21 +0000 UTC - event for affinity-nodeport-timeout-ggxrn: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-bvm9x: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-bvm9x: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-bvm9x: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.528029306s
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-g9p94: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.678659832s
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-g9p94: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-g9p94: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-ggxrn: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.475977671s
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-ggxrn: {kubelet node1} Created: Created container affinity-nodeport-timeout
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:23 +0000 UTC - event for affinity-nodeport-timeout-ggxrn: {kubelet node1} Started: Started container affinity-nodeport-timeout
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:30 +0000 UTC - event for execpod-affinity8k6b4: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 363.769516ms
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:30 +0000 UTC - event for execpod-affinity8k6b4: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:31 +0000 UTC - event for execpod-affinity8k6b4: {kubelet node2} Started: Started container agnhost-container
Oct 23 00:53:43.958: INFO: At 2021-10-23 00:51:31 +0000 UTC - event for execpod-affinity8k6b4: {kubelet node2} Created: Created container agnhost-container
Oct 23 00:53:43.958:
INFO: At 2021-10-23 00:53:34 +0000 UTC - event for execpod-affinity8k6b4: {kubelet node2} Killing: Stopping container agnhost-container Oct 23 00:53:43.958: INFO: At 2021-10-23 00:53:35 +0000 UTC - event for affinity-nodeport-timeout-bvm9x: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Oct 23 00:53:43.958: INFO: At 2021-10-23 00:53:35 +0000 UTC - event for affinity-nodeport-timeout-g9p94: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Oct 23 00:53:43.958: INFO: At 2021-10-23 00:53:35 +0000 UTC - event for affinity-nodeport-timeout-ggxrn: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout Oct 23 00:53:43.960: INFO: POD NODE PHASE GRACE CONDITIONS Oct 23 00:53:43.960: INFO: Oct 23 00:53:43.965: INFO: Logging node info for node master1 Oct 23 00:53:43.967: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 75254 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: 
{{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:35 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:35 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:35 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:35 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 
k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:53:43.968: INFO: Logging kubelet events for node master1 Oct 23 00:53:43.970: INFO: Logging pods the kubelet thinks is on node master1 Oct 23 00:53:43.989: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:43.989: INFO: Container docker-registry ready: true, restart count 0 Oct 23 00:53:43.989: INFO: Container nginx ready: true, restart count 0 Oct 23 00:53:43.989: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:43.989: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:43.989: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:53:43.989: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:53:43.989: INFO: 
kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 23 00:53:43.989: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Container kube-proxy ready: true, restart count 1 Oct 23 00:53:43.989: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Init container install-cni ready: true, restart count 1 Oct 23 00:53:43.989: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:53:43.989: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:53:43.989: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Container coredns ready: true, restart count 2 Oct 23 00:53:43.989: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:43.989: INFO: Container kube-scheduler ready: true, restart count 0 W1023 00:53:44.003418 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:53:44.070: INFO: Latency metrics for node master1 Oct 23 00:53:44.070: INFO: Logging node info for node master2 Oct 23 00:53:44.073: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 75330 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:41 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:41 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:41 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:41 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:53:44.073: INFO: Logging kubelet events for node master2 Oct 23 00:53:44.075: INFO: Logging pods the kubelet thinks is on node master2 Oct 23 00:53:44.084: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Container kube-multus ready: true, restart count 1 Oct 23 00:53:44.084: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 23 00:53:44.084: INFO: 
kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 00:53:44.084: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 00:53:44.084: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Init container install-cni ready: true, restart count 2 Oct 23 00:53:44.084: INFO: Container kube-flannel ready: true, restart count 1 Oct 23 00:53:44.084: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Container kube-apiserver ready: true, restart count 0 Oct 23 00:53:44.084: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.084: INFO: Container autoscaler ready: true, restart count 1 Oct 23 00:53:44.084: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:44.084: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:44.084: INFO: Container node-exporter ready: true, restart count 0 W1023 00:53:44.097835 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:53:44.164: INFO: Latency metrics for node master2 Oct 23 00:53:44.164: INFO: Logging node info for node master3 Oct 23 00:53:44.168: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 75246 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 00:53:44.168: INFO: Logging kubelet events for node master3 Oct 23 00:53:44.170: INFO: Logging pods the kubelet thinks is on node master3 Oct 23 00:53:44.179: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.179: INFO: Container nfd-controller ready: true, restart count 0 Oct 23 00:53:44.179: INFO: 
node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:53:44.180: INFO: Container node-exporter ready: true, restart count 0
Oct 23 00:53:44.180: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 00:53:44.180: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 00:53:44.180: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 00:53:44.180: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Init container install-cni ready: true, restart count 1
Oct 23 00:53:44.180: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 00:53:44.180: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 00:53:44.180: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:53:44.180: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.180: INFO: Container coredns ready: true, restart count 2
W1023 00:53:44.195061 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
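Per-node pod listings like the one above are essentially a pod query filtered on spec.nodeName for the node being dumped. A rough stand-alone equivalent using standard client-go calls, with the kubeconfig path taken from this run and master3 as an example node name (the framework's own dump code differs in detail):

```go
// Sketch only: list the pods bound to one node, roughly what the
// "Logging pods the kubelet thinks is on node ..." sections print.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The field selector restricts the list to pods scheduled to one node,
	// across all namespaces (empty namespace argument).
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=master3",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```

The same selector works for any node in this run, e.g. node1, where the failing test's backend pods were scheduled.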
Oct 23 00:53:44.263: INFO: Latency metrics for node master3 Oct 23 00:53:44.263: INFO: Logging node info for node node1 Oct 23 00:53:44.265: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 75245 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:17:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:34 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 
23 00:53:44.266: INFO: Logging kubelet events for node node1
Oct 23 00:53:44.268: INFO: Logging pods the kubelet thinks is on node node1
Oct 23 00:53:44.283: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 00:53:44.283: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container config-reloader ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container grafana ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container prometheus ready: true, restart count 1
Oct 23 00:53:44.283: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container collectd ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 00:53:44.283: INFO: forbid-27249171-rpsp4 started at 2021-10-23 00:51:00 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container c ready: true, restart count 0
Oct 23 00:53:44.283: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 00:53:44.283: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 00:53:44.283: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:53:44.283: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 00:53:44.283: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container discover ready: false, restart count 0
Oct 23 00:53:44.283: INFO: Container init ready: false, restart count 0
Oct 23 00:53:44.283: INFO: Container install ready: false, restart count 0
Oct 23 00:53:44.283: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container nodereport ready: true, restart count 0
Oct 23 00:53:44.283: INFO: Container reconcile ready: true, restart count 0
Oct 23 00:53:44.283: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 00:53:44.283: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:53:44.283: INFO: Init container install-cni ready: true, restart count 2
Oct 23 00:53:44.283: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 00:53:44.283: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.283: INFO:
Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 00:53:44.283: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 00:53:44.283: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 00:53:44.283: INFO: Container node-exporter ready: true, restart count 0 Oct 23 00:53:44.283: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 00:53:44.283: INFO: Container kube-sriovdp ready: true, restart count 0 W1023 00:53:44.297339 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 00:53:44.451: INFO: Latency metrics for node node1 Oct 23 00:53:44.451: INFO: Logging node info for node node2 Oct 23 00:53:44.455: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 75357 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock 
nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-22 21:18:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:44 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:44 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 00:53:44 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 00:53:44 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 00:53:44.456: INFO: Logging kubelet events for node node2
Oct 23 00:53:44.459: INFO: Logging pods the kubelet thinks are on node node2
Oct 23 00:53:44.472: INFO: var-expansion-c43d7a4d-8cc6-41c5-9e4c-c48133982575 started at 2021-10-23 00:51:52 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container dapi-container ready: false, restart count 0
Oct 23 00:53:44.472: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Init container install-cni ready: true, restart count 1
Oct 23 00:53:44.472: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 00:53:44.472: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container tas-extender ready: true, restart count 0
Oct 23 00:53:44.472: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container collectd ready: true, restart count 0
Oct 23 00:53:44.472: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 00:53:44.472: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 00:53:44.472: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 00:53:44.472: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container nodereport ready: true, restart count 1
Oct 23 00:53:44.472: INFO: Container reconcile ready: true, restart count 0
Oct 23 00:53:44.472: INFO: ss2-0 started at 2021-10-23 00:53:34 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container webserver ready: true, restart count 0
Oct 23 00:53:44.472: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 00:53:44.472: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container discover ready: false, restart count 0
Oct 23 00:53:44.472: INFO: Container init ready: false, restart count 0
Oct 23 00:53:44.472: INFO: Container install ready: false, restart count 0
Oct 23 00:53:44.472: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.472: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 00:53:44.472: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.473: INFO: Container kube-multus ready: true, restart count 1
Oct 23 00:53:44.473: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.473: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 00:53:44.473: INFO: ss2-1 started at 2021-10-23 00:53:18 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.473: INFO: Container webserver ready: false, restart count 0
Oct 23 00:53:44.473: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.473: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 00:53:44.473: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 00:53:44.473: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 00:53:44.473: INFO: Container node-exporter ready: true, restart count 0
Oct 23 00:53:44.473: INFO: ss2-2 started at 2021-10-23 00:53:14 +0000 UTC (0+1 container statuses recorded)
Oct 23 00:53:44.473: INFO: Container webserver ready: true, restart count 0
W1023 00:53:44.494641 31 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 00:53:44.752: INFO: Latency metrics for node node2
Oct 23 00:53:44.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1712" for this suite.
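
The per-node dump above (Node Info, kubelet events, per-pod container statuses) is what the framework gathers automatically after a spec fails; the same diagnostics can be pulled by hand. A minimal sketch, assuming the suite's kubeconfig is still usable and reusing the node name from the log above (sh):

  # Regenerate the harness's node2 diagnostics manually.
  export KUBECONFIG=/root/.kube/config
  kubectl describe node node2            # conditions, capacity/allocatable, recent events
  kubectl get node node2 -o yaml         # full object, including the NFD feature labels shown above
  # Roughly the "pods the kubelet thinks are on node node2" listing:
  kubectl get pods --all-namespaces --field-selector spec.nodeName=node2 -o wide
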
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [156.798 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 23 00:53:34.971: Unexpected error:
      <*errors.errorString | 0xc004dce310>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31861 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31861 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":39,"skipped":741,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
Oct 23 00:53:44.768: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:49:08.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1023 00:49:08.472590 28 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:54:08.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7140" for this suite.
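
The suspended-CronJob spec above only exercises the API: create a CronJob with spec.suspend set, wait out several schedule points, and verify no Jobs appear. A minimal manual equivalent, assuming a v1.21-era cluster (hence the batch/v1beta1 deprecation warning above); the name "probe" and the image are illustrative, not taken from the suite (sh):

  # Create a CronJob, suspend it before its first schedule boundary, then confirm no Jobs spawn.
  kubectl create cronjob probe --image=busybox:1.28 --schedule='*/1 * * * *' -- /bin/sh -c date
  kubectl patch cronjob probe -p '{"spec":{"suspend":true}}'   # patch promptly, or one stray Job may fire
  sleep 120
  kubectl get jobs          # expect no Jobs owned by the suspended CronJob
  kubectl delete cronjob probe
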
• [SLOW TEST:300.052 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:51:52.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
STEP: updating the pod
Oct 23 00:53:52.686: INFO: Successfully updated pod "var-expansion-c43d7a4d-8cc6-41c5-9e4c-c48133982575"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Oct 23 00:53:54.693: INFO: Deleting pod "var-expansion-c43d7a4d-8cc6-41c5-9e4c-c48133982575" in namespace "var-expansion-9285"
Oct 23 00:53:54.699: INFO: Wait up to 5m0s for pod "var-expansion-c43d7a4d-8cc6-41c5-9e4c-c48133982575" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:54:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9285" for this suite.

• [SLOW TEST:156.579 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":18,"skipped":420,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Oct 23 00:54:28.714: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:52:40.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-5625
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Oct 23 00:52:40.879: INFO: Found 0 stateful pods, waiting for 3
Oct 23 00:52:50.884: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 00:52:50.884: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 00:52:50.884: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 23 00:52:50.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 00:52:51.172: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 00:52:51.172: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 00:52:51.172: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Oct 23 00:53:01.205: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Oct 23 00:53:11.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 00:53:11.449: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 00:53:11.449: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 00:53:11.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 00:53:31.467: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update
Oct 23 00:53:31.467: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
STEP: Rolling back to a previous revision
Oct 23 00:53:41.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 23 00:53:41.727: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Oct 23 00:53:41.727: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 23 00:53:41.727: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Oct 23 00:53:51.755: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Oct 23 00:54:01.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5625 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 23 00:54:02.003: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Oct 23 00:54:02.003: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 23 00:54:02.003: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 23 00:54:12.088: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update
Oct 23 00:54:12.088: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Oct 23 00:54:12.088: INFO: Waiting for Pod statefulset-5625/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Oct 23 00:54:12.088: INFO: Waiting for Pod statefulset-5625/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Oct 23 00:54:22.120: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update
Oct 23 00:54:22.120: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Oct 23 00:54:22.120: INFO: Waiting for Pod statefulset-5625/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
Oct 23 00:54:32.096: INFO: Waiting for StatefulSet statefulset-5625/ss2 to complete update
Oct 23 00:54:32.096: INFO: Waiting for Pod statefulset-5625/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct 23 00:54:42.095: INFO: Deleting all statefulset in ns statefulset-5625
Oct 23 00:54:42.098: INFO: Scaling statefulset ss2 to 0
Oct 23 00:55:12.121: INFO: Waiting for statefulset status.replicas updated to 0
Oct 23 00:55:12.123: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:55:12.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5625" for this suite.

• [SLOW TEST:151.296 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":31,"skipped":389,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Oct 23 00:55:12.145: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 00:50:24.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1023 00:50:25.019597 30 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 00:56:01.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7765" for this suite.

• [SLOW TEST:336.059 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":29,"skipped":362,"failed":0}
Oct 23 00:56:01.056: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":39,"skipped":786,"failed":0}
Oct 23 00:54:08.503: INFO: Running AfterSuite actions on all nodes
Oct 23 00:56:01.127: INFO: Running AfterSuite actions on node 1
Oct 23 00:56:01.127: INFO: Skipping dumping logs from cluster

Summarizing 6 Failures:

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

Ran 320 of 5770 Specs in 850.952 seconds
FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5450 Skipped

Ginkgo ran 1 suite in 14m12.550764747s
Test Suite Failed
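
Five of the six failures are NodePort reachability checks in [sig-network] Services, each reporting a variant of "service is not reachable within 2m0s timeout" against a node IP. A minimal triage sketch for that pattern, assuming kubectl access plus a shell on the affected node, kubeadm-style kube-proxy labels, and iptables proxy mode; the IP:port pair below is copied from the last failure message and will differ per run (sh):

  # 1. Re-probe the endpoint the harness gave up on.
  NODE_IP=10.10.190.207 NODE_PORT=31861
  for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{http_code}\n" --max-time 2 "http://${NODE_IP}:${NODE_PORT}/" || echo unreachable
    sleep 1
  done
  # 2. Check the proxier that programs NodePorts.
  kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
  kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100
  # 3. On the node itself: was the port actually programmed?
  iptables-save | grep 31861       # NodePort rules, if present
  ss -lnt | grep 31861             # kube-proxy may hold a placeholder socket for the port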